Month: May 2023

MMS • RSS
Posted on nosqlgooglealerts. Visit nosqlgooglealerts

Database-as-a-service (DBaaS) provider DataStax is releasing a new support service for Kaskada, its open-source-based unified event processing engine, aimed at helping enterprises build real-time machine learning applications.
Dubbed LunaML, the new service will provide customers with “mission-critical support and offer options for incident response time as low as 15 minutes,” the company said, adding that enterprises will also have the ability to escalate issues to the core Kaskada engineering team for further review and troubleshooting.
The company is offering two packages for raising tickets, LunaML Standard and LunaML Premium, which promise 4-hour and 1-hour response times respectively, the company said in a blog post published on Thursday.
Under the standard plan, enterprises can raise 18 tickets annually. The Premium plan offers the option to raise 52 tickets in one year. Plan pricing was not immediately available.
DataStax acquired Kaskada in January for an undisclosed amount, with the intent of adding Kaskada’s capabilities to its offerings, such as its serverless NoSQL database-as-a-service AstraDB and Astra Streaming.
DataStax’s acquisition of Kaskada was based on expected demand for machine learning applications.
The company believes that Kaskada’s capabilities can solve challenges of cost and scaling around machine learning applications: the technology is designed to process large amounts of event data, whether streamed or stored in databases, and its time-based capabilities can be used to create and update features for machine learning models based on sequences of events over time.
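Kaskada exposes its own declarative query language for this, but the underlying idea of point-in-time features computed over event sequences can be sketched in plain Python (the event fields and feature name below are illustrative, not Kaskada’s actual API):

```python
from collections import defaultdict

def purchase_count_features(events):
    """Replay time-ordered events and emit, at each event, a running
    per-user purchase count: a point-in-time feature that never leaks
    information from the future into a training example."""
    counts = defaultdict(int)
    features = []
    for event in sorted(events, key=lambda e: e["ts"]):
        counts[event["user"]] += 1
        features.append({
            "ts": event["ts"],
            "user": event["user"],
            "purchases_so_far": counts[event["user"]],
        })
    return features

events = [
    {"ts": 1, "user": "a"},
    {"ts": 2, "user": "b"},
    {"ts": 3, "user": "a"},
]
```

Each emitted row reflects only the events seen up to its timestamp, which is what makes such features safe to join against training labels.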

MMS • RSS
Global NoSQL Software Market Size, Analysis, Industry Trends, Top Suppliers and COVID …

MMS • RSS
Posted on nosqlgooglealerts. Visit nosqlgooglealerts
New Jersey, United States – In a recently published report by Verified Market Research, titled “Global NoSQL Software Market Report 2030,” the analysts have provided an in-depth overview of the Global NoSQL Software Market. The report is an all-inclusive research study of the Global NoSQL Software market, taking into account growth factors, recent trends, developments, opportunities, and the competitive landscape. The market analysts and researchers have done an extensive analysis of the Global NoSQL Software market with the help of research methodologies such as PESTLE and Porter’s Five Forces analysis. They have provided accurate and reliable market data and useful recommendations with the aim of helping players gain an insight into the overall present and future market scenario. The report comprises an in-depth study of the potential segments, including product type, application, and end user, and their contribution to the overall market size.
In addition, market revenues based on region and country are provided in the report. The authors of the report have also shed light on the common business tactics adopted by players. The leading players of the Global NoSQL Software market and their complete profiles are included in the report. Besides that, investment opportunities, recommendations, and trends that are trending at present in the Global NoSQL Software market are mapped by the report. With the help of this report, the key players of the Global NoSQL Software market will be able to make sound decisions and plan their strategies accordingly to stay ahead of the curve.
Get Full PDF Sample Copy of Report: (Including Full TOC, List of Tables & Figures, Chart) @ https://www.verifiedmarketresearch.com/download-sample/?rid=153255
Key Players Mentioned in the Global NoSQL Software Market Research Report:
Amazon, Couchbase, MongoDB Inc., Microsoft, Marklogic, OrientDB, ArangoDB, Redis, CouchDB, DataStax.
Key companies operating in the Global NoSQL Software market are also comprehensively studied in the report. The Global NoSQL Software report offers definite understanding into the vendor landscape and development plans, which are likely to take place in the coming future. This report as a whole will act as an effective tool for the market players to understand the competitive scenario in the Global NoSQL Software market and accordingly plan their strategic activities.
Global NoSQL Software Market Segmentation:
NoSQL Software Market, By Type
• Document Databases
• Key-value Databases
• Wide-column Stores
• Graph Databases
• Others
NoSQL Market, By Application
• Social Networking
• Web Applications
• E-Commerce
• Data Analytics
• Data Storage
• Others
Competitive landscape is a critical aspect every key player needs to be familiar with. The report throws light on the competitive scenario of the Global NoSQL Software market to know the competition at both the domestic and global levels. Market experts have also offered an outline of every leading player of the Global NoSQL Software market, considering key aspects such as areas of operation, production, and product portfolio. Additionally, companies in the report are studied based on key factors such as company size, market share, market growth, revenue, production volume, and profits. This research report is aimed at equipping readers with all the necessary information that will help them operate efficiently across the global spectrum of the market and derive fruitful results.
The report has been segregated based on distinct categories, such as product type, application, end user, and region. Each and every segment is evaluated on the basis of CAGR, share, and growth potential. In the regional analysis, the report highlights the prospective region, which is estimated to generate opportunities in the Global NoSQL Software market in the forthcoming years. This segmental analysis will surely turn out to be a useful tool for the readers, stakeholders, and market participants to get a complete picture of the Global NoSQL Software market and its potential to grow in the years to come. The key regions covered in the report are North America, Europe, Asia Pacific, the Middle East and Africa, South Asia, Latin America, Central and South America, and others. The Global NoSQL Software report offers an in-depth assessment of the growth rate of these regions and comprehensive review of the countries that will be leading the regional growth.
Inquire for a Discount on this Premium Report @ https://www.verifiedmarketresearch.com/ask-for-discount/?rid=153255
What to Expect in Our Report?
(1) A complete section of the Global NoSQL Software market report is dedicated for market dynamics, which include influence factors, market drivers, challenges, opportunities, and trends.
(2) Another broad section of the research study is reserved for regional analysis of the Global NoSQL Software market where important regions and countries are assessed for their growth potential, consumption, market share, and other vital factors indicating their market growth.
(3) Players can use the competitive analysis provided in the report to build new strategies or fine-tune their existing ones to rise above market challenges and increase their share of the Global NoSQL Software market.
(4) The report also discusses the competitive situation and trends and sheds light on company expansions and mergers and acquisitions taking place in the Global NoSQL Software market. Moreover, it brings to light the market concentration rate and the market shares of the top three and top five players.
(5) Readers are provided with findings and conclusion of the research study provided in the Global NoSQL Software Market report.
Key Questions Answered in the Report:
(1) What are the growth opportunities for the new entrants in the Global NoSQL Software industry?
(2) Who are the leading players functioning in the Global NoSQL Software marketplace?
(3) What are the key strategies participants are likely to adopt to increase their share in the Global NoSQL Software industry?
(4) What is the competitive situation in the Global NoSQL Software market?
(5) What are the emerging trends that may influence the Global NoSQL Software market growth?
(6) Which product type segment will exhibit high CAGR in future?
(7) Which application segment will grab a handsome share in the Global NoSQL Software industry?
(8) Which region is lucrative for the manufacturers?
For More Information or Query or Customization Before Buying, Visit @ https://www.verifiedmarketresearch.com/product/nosql-software-market/
About Us: Verified Market Research®
Verified Market Research® is a leading global research and consulting firm that has been providing advanced analytical research solutions, custom consulting, and in-depth data analysis for 10+ years to individuals and companies alike that are looking for accurate, reliable, and up-to-date research data and technical consulting. We offer insights into strategic and growth analyses, the data necessary to achieve corporate goals, and help with critical revenue decisions.
Our research studies help our clients make superior data-driven decisions, understand market forecasts, capitalize on future opportunities, and optimize efficiency by working as their partner to deliver accurate and valuable information. The industries we cover span a large spectrum, including Technology, Chemicals, Manufacturing, Energy, Food and Beverages, Automotive, Robotics, Packaging, Construction, Mining & Gas, etc.
We, at Verified Market Research, assist in understanding holistic market indicating factors and most current and future market trends. Our analysts, with their high expertise in data gathering and governance, utilize industry techniques to collate and examine data at all stages. They are trained to combine modern data collection techniques, superior research methodology, subject expertise and years of collective experience to produce informative and accurate research.
Having served over 5,000 clients, we have provided reliable market research services to more than 100 Global Fortune 500 companies such as Amazon, Dell, IBM, Shell, Exxon Mobil, General Electric, Siemens, Microsoft, Sony, and Hitachi. We have co-consulted with some of the world’s leading consulting firms, like McKinsey & Company, Boston Consulting Group, and Bain and Company, for custom research and consulting projects for businesses worldwide.
Contact us:
Mr. Edwyne Fernandes
Verified Market Research®
US: +1 (650)-781-4080
UK: +44 (753)-715-0008
APAC: +61 (488)-85-9400
US Toll-Free: +1 (800)-782-1768
Email: sales@verifiedmarketresearch.com
Website: https://www.verifiedmarketresearch.com/
NoSQL Databases Software Market 2031 Insights with Key Innovations Analysis – Fylladey

MMS • RSS
Posted on nosqlgooglealerts. Visit nosqlgooglealerts
Mr Accuracy Reports recently introduced a new title on the NoSQL Databases Software Market 2023, with a forecast to 2031, from its database. The NoSQL Databases Software report provides a study with an in-depth overview, describing the product/industry scope and elaborating on the market outlook and status (2023-2031). The NoSQL Databases Software report is curated after in-depth research and analysis by experts. It provides comprehensive, valuable insights on global NoSQL Databases Software market development activities demonstrated by industry players, growth opportunities, and market sizing, with analysis by key segments, leading and emerging players, and geographies.
Following are the key-players covered in the report: – MongoDB, Amazon, ArangoDB, Azure Cosmos DB, Couchbase, MarkLogic, RethinkDB, CouchDB, SQL-RD, OrientDB, RavenDB, Redis
Get a free sample copy of the NoSQL Databases Software report: – https://www.mraccuracyreports.com/report-sample/204170
The NoSQL Databases Software report contains a methodical explanation of current NoSQL Databases Software market trends to assist users with an in-depth market analysis. The study helps in identifying and tracking emerging players in the global NoSQL Databases Software market and their portfolios, to enhance decision-making capabilities. Basic market factors covered in this report include a market overview, definitions and classifications, and an industry chain overview. The report predicts future market direction for the forecast period from 2022 to 2031 with the help of past and current market values.
NoSQL Databases Software Report Objectives:
- To examine the global NoSQL Databases Software market size by value and volume.
- To calculate the NoSQL Databases Software market segments, consumption, and other dynamic factors of the various units of the market.
- To determine the key dynamics of the NoSQL Databases Software market.
- To highlight key trends in the NoSQL Databases Software market in terms of manufacturing, revenue, and sales.
- To summarize the top players of the NoSQL Databases Software industry.
- To showcase the performance of different regions and countries in the global NoSQL Databases Software market.
Global NoSQL Databases Software Market Segmentation:
Market Segmentation: By Type
Cloud Based, Web Based
Market Segmentation: By Application
Large Enterprises, SMEs
The NoSQL Databases Software report encompasses a comprehensive assessment of different strategies, like mergers & acquisitions, product developments, and research & development, adopted by prominent market leaders to stay at the forefront of the global NoSQL Databases Software market. The research identifies the most important aspects of business development patterns, such as drivers and restraints, and examines the market’s scope, strengths, weaknesses, opportunities, and threats using a SWOT analysis.
FLAT30% DISCOUNT TO BUY FULL STUDY:- https://www.mraccuracyreports.com/check-discount/204170
The NoSQL Databases Software market can be divided into:
North America (U.S., Canada, Mexico), Europe (Germany, France, U.K., Italy, Spain, Rest of Europe), Asia-Pacific (China, Japan, India, Rest of APAC), South America (Brazil and Rest of South America), Middle East and Africa (UAE, South Africa, Rest of MEA).
Recent trends and growth opportunities in the market in the coming period are highlighted. The report covers major players/suppliers worldwide and market share by region, with company and product introductions and each player’s position in the global NoSQL Databases Software market, including market status and development trends by type and application, price and profit status, and market growth drivers and challenges. This latest report provides worldwide NoSQL Databases Software market predictions for the forthcoming years.
Direct Purchase this Market Research Report Now @ https://www.mraccuracyreports.com/checkout/204170
If you have any special requirements, please contact our sales professional (sales@mraccuracyreports.com). No additional cost will be charged for limited additional research. We will make sure you get the report that works for your needs.
Thank you for taking the time to read our article.
ABOUT US:
Mr Accuracy Reports is an ESOMAR-certified business consulting & market research firm, a member of the Greater New York Chamber of Commerce, headquartered in Canada. A recipient of the Clutch Leaders Award 2022 on account of a high client score (4.9/5), we have been collaborating with global enterprises in their business transformation journeys and helping them deliver on their business ambitions. 90% of the largest Forbes 1000 enterprises are our clients. We serve global clients across all leading & niche market segments across all major industries.
Mr Accuracy Reports is a global front-runner in the research industry, offering customers contextual and data-driven research services. The firm supports customers in creating business plans and attaining long-term success in their respective marketplaces, and provides consulting services, research studies, and customized research reports.

MMS • Ben Linders
Article originally posted on InfoQ. Visit InfoQ

Three toxic behaviors that open-source maintainers experience are entitlement, people venting their frustration, and outright attacks. Growing a thick skin and ignoring the behavior can lead to a negative spiral of anger and sadness. Instead, we should call out the behavior and remind people that open source means collaboration and cooperation.
Gina Häußge spoke about dealing with toxic people as an open-source maintainer at OOP 2023 Digital.
There are three toxic behaviors that maintainers experience all the time, Häußge mentioned. The most common one is entitlement. Quite a number of users out there are of the opinion that because you already gave them something, you owe them even more, and will become outright aggressive when you don’t meet their demands.
Then there are people venting their frustration at something not working the way they expect, Häußge said, who then can become abusive in the process.
The third toxic behavior is attacks, mostly from people who either don’t see their entitled demands met or who can’t cope with their own frustration, sometimes from trolls, as Häußge explained:
That has reached from expletives to suggestions to end my own life.
Häußge mentioned that she tried to deal with toxic behavior by growing a thick skin and ignoring the behavior. She thought that getting worked up over it was a personal flaw of hers. It turned out that she was trying to ignore human nature and the stress response cycle, as Häußge explained:
Trying to ignore things just meant they’d circle endlessly in my head, often for days, sometimes for weeks, and make me spiral into being angrier and angrier, or sadder and sadder. And that in turn would influence the way I communicate, often only escalating things further, or causing issues elsewhere.
Häußge mentioned that when she’s faced with entitlement or venting, she often reminds people of the realities at work. “Open Source means collaboration and cooperation, not demands,” she said. If people want to see something implemented, they should help get it done – with code, but also with things like documentation and bug analysis:
Anything I don’t have to do myself means more time for coding work to solve other people’s problems.
It shouldn’t just fall to the maintainer, Häußge said. We all can identify bad behavior when we see it and can call it out as such. We don’t have to leave it to the maintainers to also constantly defend their own boundaries or take abuse silently, she stated.
Häußge mentioned to also always look at ourselves in the mirror and make sure we don’t become offenders ourselves:
Remember the human on the other end at all times.
InfoQ interviewed Gina Häußge about toxic behavior towards open-source maintainers.
InfoQ: What impact of toxic behavior on both maintainers and OSS communities have you observed?
Gina Häußge: Over the years I’ve talked to a bunch of fellow OSS maintainers, and the general consensus also reflects my own experience: these experiences can ruin your day, they can ruin your whole week, and sometimes they make you seriously question why you even continue to maintain a project. They certainly contribute to maintainer burnout, and thus pose a risk to the project as a whole. It’s death by a thousand papercuts. And they turn whole communities toxic when left standing unopposed.
InfoQ: How have you learned to cope with toxic behavior?
Häußge: A solution for the stress response cycle is physical activity. I have a punching bag in my office, and even just 30 seconds on this thing get my heart going! This signals to my brain that I have acknowledged the threat and am doing something against it, completing the stress response cycle. Once I’ve done that, I’m in control again and can take appropriate next steps.
If things get abusive, I make it clear that behavior such as what they just demonstrated won’t be tolerated. This has gotten me a surprising number of apologies over the years, but sometimes it has also led to further escalation. In that case, I show people the door and if push comes to shove ban them.
InfoQ: What can we do in open-source projects to handle toxic behavior?
Häußge: The general mantra in Open Source used to be that as a maintainer you just need to grow a thick skin, ignore the haters – and if you can’t then you are simply not cut out for the job.
I disagree with this. The constant onslaught of this kind of treatment either will break you or turn you into a worse person, and neither should be something you just have to accept for wanting to maintain OSS. Enforce your own boundaries and your project’s CoC, and demand to be treated with human decency.

MMS • Steef-Jan Wiggers
Article originally posted on InfoQ. Visit InfoQ
AWS recently announced a new feature, Provisioned Capacity for Athena, which allows users to run SQL queries on fully managed compute capacity for a fixed price and with no long-term commitments.
Athena is a serverless interactive query service that allows users to analyze data in Amazon Simple Storage Service (Amazon S3) data lakes and 30 different data sources, including on-premises data sources or other cloud systems, using standard SQL queries. Provisioned Capacity is an optional add-on feature of Athena.
With the added Provisioned Capacity feature, users can now pre-purchase query processing capacity for a set duration and choose the number of concurrent queries they want to run – allowing them to manage their query performance and costs more effectively, especially for critical workloads that require consistent and predictable query performance.
Sébastien Stormacq, a principal developer advocate at AWS, explains the Provisioned Capacity in an AWS News blog post:
Behind the scenes, Athena maintains a large pool of compute in each AWS Region that it operates in. You can think of this as one large pool of compute, divided logically across customers. When you reserve capacity in Athena, the capacity is held for your exclusive use. You can choose which queries run on the capacity you provisioned and which run on Athena’s multi-tenant, on-demand capacity. Multiple queries can share the capacity you provisioned.
Users can increase their capacity units anytime to meet their needs or reduce their provisioned capacity after at least eight hours.
The capacity units are based on the so-called Data Processing Unit (DPU), with a single unit representing four vCPUs and 16 GB of RAM. The minimum capacity users may provision is 24 DPUs for eight hours; according to Stormacq, provisioned capacity is ideal when spending $100 or more per month on Athena.
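Using those figures (4 vCPUs and 16 GB of RAM per DPU, a 24-DPU minimum held for at least eight hours), the aggregate compute behind a minimum reservation works out as follows:

```python
VCPU_PER_DPU = 4      # per the announcement: one DPU = 4 vCPUs
GB_RAM_PER_DPU = 16   # ... and 16 GB of RAM
MIN_DPUS = 24         # smallest reservation Athena allows
MIN_HOURS = 8         # minimum time before capacity can be reduced

total_vcpu = MIN_DPUS * VCPU_PER_DPU      # 96 vCPUs
total_ram_gb = MIN_DPUS * GB_RAM_PER_DPU  # 384 GB of RAM
min_dpu_hours = MIN_DPUS * MIN_HOURS      # 192 DPU-hours held at minimum
```

So the smallest reservation already represents a substantial slice of compute, which is consistent with Stormacq's suggestion that the feature targets accounts spending $100 or more per month.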
By reserving capacity in advance, users can avoid queuing delays, prioritize queries, and gain more predictable query performance. The company provides guidelines to determine how much capacity users might require.
Through the Athena console, AWS SDK, or CLI, users can set the capacity for their account and select the workgroups whose queries they want to use the capacity. A workgroup is an Athena mechanism that allows users to separate users, teams, applications, or workloads, to set limits on the amount of data each query or the entire workgroup can process, and to track costs.
Queries associated with the designated workgroup will execute using the provisioned capacity. In addition, the capacity can be shared among several workgroups, provided they all utilize the same Athena engine version.
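As a sketch of how the workgroup assignment described here could be driven programmatically: the operation names below mirror Athena's `CreateCapacityReservation` and `PutCapacityAssignmentConfiguration` APIs, but the exact request shape is an assumption to verify against the SDK documentation, and the reservation and workgroup names are placeholders.

```python
def capacity_assignment_request(reservation_name, workgroups):
    """Build a request assigning a set of workgroups to a named
    capacity reservation, in the shape Athena's
    PutCapacityAssignmentConfiguration API appears to expect
    (assumed here, not verified)."""
    return {
        "CapacityReservationName": reservation_name,
        "CapacityAssignments": [{"WorkGroupNames": list(workgroups)}],
    }

# Hypothetical usage with boto3 (not executed here):
#   athena = boto3.client("athena")
#   athena.create_capacity_reservation(Name="analytics-res", TargetDpus=24)
#   athena.put_capacity_assignment_configuration(
#       **capacity_assignment_request("analytics-res", ["etl", "bi"]))
req = capacity_assignment_request("analytics-res", ["etl", "bi"])
```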
Source: https://aws.amazon.com/blogs/aws/introducing-athena-provisioned-capacity/
Other services similar to Athena include Google BigQuery, Microsoft Azure Synapse Analytics, Snowflake, and Apache Spark. Mustafa Akın, co-founder at Resmo and an AWS Community Builder, commented in a tweet on provisioned capacity in Athena:
Just use Snowflake if you need this
In contrast, Roni Burd, head of engineering (director) – EMR/Athena query engines, wrote in a LinkedIn post:
The new provisioned capacity model is great for customers who want larger scale and/or no-queue latencies and/or full control of the capacity while enjoying the same “just-works” serverless nature of Athena. It also makes it easier to reason about budget allocations, which are important for customer offering data lake queries to their own customers.
Currently, Athena Provisioned Capacity is available in the US East (Ohio, N. Virginia), US West (Oregon), Asia Pacific (Singapore, Sydney, Tokyo), and Europe (Ireland, Stockholm) AWS Regions. In addition, pricing details of Athena are available on the pricing page.

MMS • Sergio De Simone
Article originally posted on InfoQ. Visit InfoQ

LLMs can be an effective way to generate structured data from semi-structured data, although an expensive one. A team of Stanford and Cornell researchers claim to have found a technique to reduce inference costs by 110x while improving inference quality.
According to Simran Arora, first author of the paper, using LLMs for inference on unstructured documents may get expensive as the corpus grows, with an estimated cost of at least $0.001 per 1K tokens. The strategy she and her colleagues at Stanford propose promises to reduce inference cost by 110 times using a sophisticated code synthesis tool dubbed EVAPORATE.
The basic task that EVAPORATE aims to solve can be described in the following terms: starting from heterogeneous documents (such as HTML files, PDFs, and text files), identify a suitable schema and extract data to populate a table. Traditional approaches to extracting structured data from semi-structured data often rely on a number of simplifying assumptions, for example regarding the position of tags in HTML documents or the existence of annotations, which necessarily end up reducing the generality of the system. EVAPORATE aims to maintain generality by leveraging large language models.
In their paper, the researchers explore two alternative ways to extract data: using an LLM to extract values from the documents and build a tabular representation of the data, or synthesizing code that is later used to process the documents at large scale. The two approaches have different trade-offs in terms of cost and quality: while the direct approach performs very well in comparison to traditional techniques, it is very expensive.
LLMs are optimized for interactive, human-in-the-loop applications (e.g. ChatGPT), not high-throughput data processing tasks. The number of tokens processed by an LLM in EVAPORATE-DIRECT grows linearly with the size of the data lake.
On the other hand, the code approach, dubbed EVAPORATE-CODE, uses the LLM only on a small subset of the documents to generate a schema and synthesize a number of functions in a traditional programming language, e.g., Python, to extract the data from the whole set of documents. This approach is clearly less expensive than the former, but the synthesized functions tend to be of varying quality, which affects the quality of the output table.
To strike a better balance between quality and cost, the researchers added a new element to their recipe: generating many candidate functions and estimating their quality. The results produced by those functions are then aggregated using weak supervision. This solution helps reduce the variability across generated functions, especially those that work only on a specific subset of documents, as well as the impact of those containing syntactic or logical errors.
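The idea of aggregating many candidate functions by estimated quality can be illustrated with a simplified sketch. The paper's weak-supervision machinery is more sophisticated than this weighted vote, and the extractors below are hand-written stand-ins for LLM-synthesized code:

```python
import re
from collections import Counter

def aggregate(candidates, weights, docs):
    """For each document, run every candidate extractor and keep the
    value with the highest total weight (weights approximate each
    function's estimated quality). Functions returning None simply
    abstain, so broken candidates do little damage."""
    table = []
    for doc in docs:
        votes = Counter()
        for fn, weight in zip(candidates, weights):
            value = fn(doc)
            if value is not None:
                votes[value] += weight
        table.append(votes.most_common(1)[0][0] if votes else None)
    return table

# Hand-written stand-ins for synthesized extractors of a "year" attribute.
f1 = lambda d: (m := re.search(r"\b(19|20)\d{2}\b", d)) and m.group(0)
f2 = lambda d: d.split()[-1] if d.split()[-1].isdigit() else None
f3 = lambda d: None  # a broken synthesized function contributes nothing

docs = ["published in 2021", "report 1999", "no year here"]
result = aggregate([f1, f2, f3], [0.9, 0.5, 0.1], docs)
```

Because the vote is weighted, a low-quality function that disagrees with the better ones is simply outvoted rather than corrupting the output table.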
Based on their evaluation of 16 sets of documents across a range of formats, topics, and attribute types, the researchers say the extended approach, named EVAPORATE-CODE+, outperforms state-of-the-art systems that make simplifying assumptions, and achieves a 110x reduction in inference cost in comparison to EVAPORATE-DIRECT.
Our findings demonstrate the promise of function synthesis as a way to mitigate cost when using LLMs. We study the problem of materializing a structured view of an unstructured dataset, but this insight may be applicable in a broader suite of data wrangling tasks.
According to the researchers, there are many opportunities to further develop their system, including the possibilities of generating functions that invoke other AI models, such as those available on Hugging Face or through OpenAI. Another dimension to explore, they say, is iterating function generation so that, in case a sub-optimal or incorrect function is generated, it is fed back to the LLM to generate an improved function.

MMS • Irina Scurtu Martin Thwaites Guilherme Ferreira Scott Hansel
Article originally posted on InfoQ. Visit InfoQ

Transcript
Losio: In this session, we are going to be chatting about moving .NET applications to the cloud. I would like just to clarify a couple of words about the topic. As organizations are increasingly moving towards cloud computing, there is a growing need for .NET applications to be migrated to the cloud. We are going to discuss which tools and services a .NET developer can use to be successful in building cloud-native applications. We'll discuss benefits as well as challenges, mistakes made, and suggestions across the different options, because there's not just one way to do it, whatever cloud provider you choose: managed Kubernetes services, serverless platforms, edge-based hosting options. We'll see during this panel how we can move .NET applications to the cloud.
Background & Experience with .NET and Cloud Tech
My name is Renato Losio. I’m an editor here at InfoQ. I work as a principal cloud architect at Funambol. We are joined by four experts on .NET and cloud technology, coming from very different companies, different countries, different backgrounds. I would like to start giving each one of them the opportunity to introduce themselves, share their experience with .NET, and cloud technology.
Thwaites: I’m Martin Thwaites. I go by MartinDotNet on the Twitter’s, should give you an idea of where my focus has been for the last years on .NET. I’m a principal developer advocate for a company called Honeycomb, who provide observability type solutions. I work a lot on the OpenTelemetry .NET open source libraries.
Scurtu: My name is Irina Scurtu. I'm an independent consultant, Microsoft MVP, and speaker at various conferences. I've worked with .NET for as long as I can remember: when I finished the computer science faculty, I leaned toward .NET because I hated Java at the time. .NET was the alternative, and C# was lovely to learn.
Ferreira: Feel free to call me Gui. I’ve been working with .NET since my first job. I have been in the cloud since 2012, I think so. Currently, I’m a developer advocate at FARFETCH and a content creator on YouTube, and all those kinds of things.
Hanselman: My name is Scott Hanselman. I’ve been programming now for 31 years. I have been doing .NET since its inception. Before I worked at Microsoft, I did large scale banking. I was basically putting retail banks online. I have experience in not just doing things on the web and in the cloud, but also doing it securely within the context of government requirements and things like that.
Major Pain Points of a .NET Developer, Dealing with the Cloud
Losio: Let’s start immediately with the challenges. What do you think is the major pain point today for a .NET developer, dealing with cloud technology, moving to the cloud?
Hanselman: I think that sometimes people move to the cloud in a very naive way. They think of the cloud as just hosting, at scale. I think that that’s a very simplistic or naive way to look at that. I use that word naive very specifically, because to be naive is a kind of ignorance, but it doesn’t indicate that it’s not something you can move beyond. You can teach yourself about these things. I feel a lot of people just lift and shift. What they’ll end up doing is they’ll pick up their .NET app, and they’ll move it over there. They’ll say, we’re in the cloud. Maybe they’ll get on a virtual machine, or maybe they’ll do platform as a service. That’s great. I think it is simplistic when there’s so much more elasticity and self-service that they could do. They also tend to spend too much money in the cloud. The amount of headroom that you need on a local machine that you paid for, and the amount of like extra CPU space, extra memory space, you can abuse the cloud, you can treat the cloud like a hotel room or an Airbnb, and you can leave it destroyed. Then let the people clean up after you.
Scurtu: I see so many teams using the cloud just because it's the cloud, it's there, and it should be used, and afterwards complaining about the invoices that come at the end of the month. They didn't know how to tweak things, or they used things just because they were shiny, not because they needed them.
Minimizing Lift and Shift During Migration
Losio: Scott raised the point of lift and shift. I was wondering if it's really that people don't know, or maybe people are overwhelmed by too many options to do one thing, by how many services cloud providers now offer. What's the first step a .NET developer should take to avoid, at this point, a pure lift and shift? What do you recommend?
Thwaites: I honestly don't think it's .NET developer specific. This is about defining why you're moving to the cloud. Is it because you want things to be cheaper? Is it because you would like things to be easier to scale? Do you want that elasticity? If you don't define why you want to go, you can't decide how you're going to go. If you want elasticity, just dropping your stateful website that doesn't support autoscaling in the cloud is not going to give you elasticity. Because you're still going to have to have that stateful thing in the cloud and it won't scale. You can maybe buy a bigger server quicker, but you're still going to have downtime. If you don't know why you're moving up, you can't choose the right platform. Is it choosing App Service? Is it choosing the new Cloud Run stuff? All of that stuff is, I need to know why I'm trying to do it in order to be able to choose the right things.
There's everything from Container Apps in Azure, and Fargate in AWS. You've got Functions. You've got Lambda. You've got all of those different things. Unless you know why you want to move to the cloud, you've just got that edict that's come down from the big people upstairs that says we need to move to the cloud. Like, "Great. Ok, I've moved to the cloud." Why is it seven times more expensive? You didn't specify that reducing cost was the reason why we're going up there. If you don't specify the why, then you are not going to get there. You might say, yes, we've got the rubber stamp, the tick box that says we're in the cloud, but that's it. I think that's the biggest problem that people have at the moment. They think that just lifting and shifting an app that exists on-premise, or on that machine under Scott's desk, is enough. Moving that up to the cloud doesn't inherently make it more scalable, it doesn't make it cheaper, it doesn't do any of those things. It will let you tick the checkbox of being in the cloud. If that's the only thing you want to hit, do it.
Lift and Shift: First Step or Incremental Move to the Cloud?
Losio: We're back to the lift and shift topic. Do you see it as always a bad approach, or can it be a first step, an incremental move to the cloud? I have my monolith, I move it to the cloud, and I start to break it into pieces. Or maybe I prefer to start immediately by moving my .NET app to Functions or whatever.
Ferreira: It can be a first step, because everyone remembers the first two months of the pandemic: you have a service in-house, and everyone needs to access it now. How do you do it? If you have just a month to do something, it's better than nothing. As Martin was saying, I can remember those moments when we were being sold that cloud was just about cost savings. I don't recall a sales pitch on the potential of the cloud. If you go with lift and shift, you can achieve some results. At least you should think about what you'll be doing next. What are the next steps? There's a strategy needed there, in my opinion.
The Next Step After a Lift and Shift Migration
Losio: Irina, as Gui just mentioned, you should think about what the next step is. Say I'm a developer. I did my very first step. I got some credits from Microsoft, or some credits from AWS, and I decided to lift and shift my stuff to the cloud. I have maybe a simple app pairing .NET with a SQL Server database, or whatever. Now, what's the next step? Do I just wait until I run out of credits and then think, or how can I really go cloud native? What's the next step that you recommend?
Scurtu: As an architect, I would say that it depends. If you're a developer trying to learn about the cloud, what it is and why it should be used, then as a .NET developer, it's very easy. You have everything there; with minimal code changes, you're up and running in the cloud. If you're thinking, I have a whole system that I want in the cloud, the story is way longer than "it depends." It depends on the underlying reason that you have. Do you want to be in the cloud because it's shiny, do you want to modernize your app, or do you want to actually achieve some business checkpoints, whether in costs, in scaling your app, in serving your customers, or just in making sure that you won't lose business when there is high demand and you need the elasticity of the cloud?
Hanselman: I just think it's lovely, though, because what we're acknowledging is that software is meant to solve problems for humans and for businesses. If we go into those things just looking at the tech for the tech's sake, we're going to miss the point. Everyone on the panel has so eloquently asked: why? Why does the cloud exist? Why are you moving to the cloud? Because you'll see people work for a year on their cloud migration strategy. You'll ask, what did you accomplish? They'll say, it's over there now. This one goes to 11.
Approaching .NET Development from Scratch (Serverless vs. Kubernetes)
Losio: Let's say that I'm in a very lucky scenario, I start from scratch. I don't have to migrate anything. Now let's get a bit more into what I can really use. I have the world open in front of me, I can choose any cloud provider I want. I want to go to the cloud. How do I develop my app? Do I start with serverless because it's cool, because it's better? Do I start with Kubernetes? Do I start with whatever? How would I approach my .NET development from scratch?
Thwaites: The first step I would go with is Container Apps or Fargate. It's middle of the road, which is the reason why I normally recommend it to people: you've got control over your stuff. You can run it very efficiently locally. You're scaling very easily. In using containers, you're essentially going to be building something that's stateless, that's built to scale. You're not having to worry about VMs. You're not having to worry about a lot of things. You're also not having to worry about hiring 17 Kubernetes administrators to fire up your AKS cluster and manage it, and scale it, and all that stuff. There's been a while where people have been going, "Kubernetes is the future. Everybody should be deploying on Kubernetes." I don't agree. I think managed container platforms are the future. Because I don't want to care about Kubernetes, I want to say, go and run my app. I would probably use App Service, if I'm on .NET and Azure. There's actually something that AWS has just released for the .NET stuff, which is very similar, which is, here's my code, go run it. I don't want to care about where it runs. I don't want to care about VMs. I don't want to care about a slider to add more VMs. I just want, here's my code, go and run it.
I also don't want those costs to scale exponentially, like you would get with serverless. There is too much to consider with serverless. To me, middle of the road is containers. That's what I always recommend to people: run it in a container, run it on Container Apps, run it in Fargate. Those things are much easier to get started with. Then you can choose which direction you go. Do I go with something that's charged by request or execution, or do I go with something where I have consistent scale, so I can actually get a much better cost by buying VMs up front? If you go middle of the road, you've got both ways that you could go with it.
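The container route Martin describes needs surprisingly little setup. As a hedged illustration (the project name MyApp and the .NET 8 image tags are assumptions, not something from the discussion), a minimal multi-stage Dockerfile for an ASP.NET Core app might look like this:

```dockerfile
# Build stage: the full SDK image compiles and publishes the app.
FROM mcr.microsoft.com/dotnet/sdk:8.0 AS build
WORKDIR /src
COPY . .
RUN dotnet publish -c Release -o /app

# Runtime stage: the much smaller ASP.NET runtime image is what ships to
# Container Apps, Fargate, App Service for containers, or AKS alike.
FROM mcr.microsoft.com/dotnet/aspnet:8.0
WORKDIR /app
COPY --from=build /app .
ENTRYPOINT ["dotnet", "MyApp.dll"]
```

The same image runs unchanged on any of the platforms mentioned, which is what keeps the "middle of the road" option open in both directions.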
Losio: Gui, do you agree or will you go more on the Kubernetes way?
Ferreira: No. I will say that when you start, if we're talking about Azure, the easiest way is App Service: for sure you are a web developer, you are more than used to building web applications, and deploying to App Service is quite simple. You don't need to learn a lot of things. It's an entry point. Once you start getting comfortable with that, maybe you can start thinking about what types of things you can solve with serverless. Because learning serverless when you have been doing web development all your life, it's a different beast. There are different concerns that you need to think about. Kubernetes, you most likely don't need it.
Managed Container Platforms
Losio: If I understand correctly, you're all saying that as a single developer, or if you're not running a huge, large-scale project, you're probably going to Kubernetes too early. Did I get it right or wrong? Do you see any scenario where it makes sense to move to Kubernetes almost immediately?
Hanselman: A couple of years ago, maybe 5 or 10, IIS ran the world; then it was Apache. Then we went to sleep for a minute and we woke up on a Tuesday afternoon, and it was NGINX, and no one could find Apache anywhere. Then a couple of weeks went by, and then it was Kubernetes. It's always going to be something. Kubernetes is great. It's lovely. It's the hotness right now. I have to agree with Martin that I don't want to see all the knobs and the dials. I don't want my web application to look like the dashboard of a 747. It's too much. I think that managed container platforms, with orchestrators of which Kubernetes is the first among equals, are the move. I think that we should probably spend some time thinking about what it is about .NET that is special and impressive, and then how it relates to containers.
As an example, I ran my blog and my podcast on Windows Server 2003 on IIS for 15 years. Then with the .NET Core wave, and now .NET 5, 6, 7, I compiled it on Linux and put it in a Docker container. Now I can put it anywhere I want to. I can run it on Kubernetes. I can run it on Linux. I can run it on WSL. I can run it on a Raspberry Pi. This is an 18-year-old .NET Windows application that by virtue of .NET's ecosystem is now a cloud-based container application; it happens to be running in App Service for containers on Azure, but it could work in ACA or AKS. That's the magic in my mind of .NET. If I wanted to move it to Linode, I could do it tomorrow. It wouldn't even have a moment of downtime.
When Azure App Service is not the Best Choice
Losio: I've already heard three people mention the benefits of using App Service. Is there any scenario where any of you would recommend not using it, where App Service is probably not the best choice for moving, for example, a .NET app?
Hanselman: One example I like to give is actually now almost a 20 or 25-year-old example. I used to work at a company called 800.com. We had a deal where we sold three DVDs for a dollar. It was a pre-Amazon online system. We had the shopping cart, and we had the product catalog. We ran it all on IIS. Imagine if you're running it on App Service, and people are browsing, and people are buying stuff. Ninety-seven percent of people were basically browsing, and 3% were buying stuff. Then we said, three DVDs for a dollar to get the internet's attention, and the internet lost their minds. Then suddenly, 3% of people were browsing, and 97% of people were literally trying to give us a dollar. Because the whole thing was one application in one place, we couldn't just scale up the shopping cart. In this case, again, 20-plus years ago, we had to change DNS to have shoppingcart.800.com and products.800.com, effectively partitioning node sets, and then change the scaling model. If you put everything in one single pile in App Service, you have limited scaling abilities, because you don't have microservices, you don't have individual things. You could have separate App Services, or you've got Kubernetes and you're running things and you go, "Quick, turn up the knob on the shopping cart side and turn down the knob on the product side," or serverless, and then change the elasticity of those services. App Service would probably be a little more difficult if you had a more complicated architecture like that.
Patching and Security Updates in the Cloud
Losio: It sounds great to just run my code, but how about patching and security updates? I hear that question often about the move to the cloud. How do I think about patching and security updates when I move to the cloud?
Scurtu: The nice thing about the cloud is that they just take care of that. Patching and security in your code is your business: using secure DLLs, libraries, and everything related to the code itself. When it comes to infrastructure, whatever you're using except virtual machines, they will take care of it. They remove the burden of you going in, updating, and having downtime, because basically someone else is running the things in your place. It's nice. It removes the burden of manually getting into servers and applying updates. For example, when you're working, you have your computer open and it just starts updating. It's not nice when that happens on a server that's on-premise.
Thwaites: We talked about running containers. I think one of the things that people miss is your container is your responsibility, not just your code, when you’re doing containers. When you’re doing App Service, which is the reason why I want, here’s my code, just run it, because somebody else manages both the container runtime and what’s installed inside my container, and updating the base images, and all of that stuff. App Service is great, because I can just say, here’s my code, or Functions is great, just give me my code, and you can go run it. If you’re running .NET in the container, you’re going to have to go up a little bit further. You’re going to have to say, I need something that’s going to manage the security of my containers, and know the operating system they’re running on. If I’m running a Debian container, or even if I’m running Alpine, there are security vulnerabilities that are in there. Do keep that in mind, because you’ve still got to do that if you want to run containers. You’re not completely out of the whole thing.
What to be Aware of, When Architecting and Designing for Cloud Native
Losio: When we’re architecting and designing for cloud native, what is a concern for us? What do developers need to be aware of? Basically, how do we keep the 747 cockpit that we mentioned before, hidden from the devs? Any advice?
Ferreira: I have worked at at least two companies where, due to the size and all those "Netflix problems," we were running on Kubernetes or on AKS. One common problem that I've seen is that when you give access to everything, and all the responsibilities fall on developers, they will have extra concerns in the day-to-day job. They have more stuff to learn. You will demand more from them. Usually that creates some problems, because not everyone likes to do the Ops part of the job. It's always a tradeoff. With great power comes great responsibility. That's the way I see it. In those organizations, I've always seen, in the end, the architecture team trying to create abstractions on top of those platforms. If you are creating those abstractions, maybe they already exist on the cloud platform that you chose.
Thwaites: I wanted to bring in the new phrase that people are going on about: platform engineering. The idea of the 747 cockpit is what platform engineering teams build. They build your own internal abstraction on top of Azure or the Kubernetes stuff; they'll build their own dashboards, their own cockpits. That is useful to them. That 747 cockpit is what they've built to look at. That's their tool. Yes, you hide that away from developers, but developers still need to care about more. They need to care about how they deploy. They need to care about where it's deployed. They need to care about scale. Don't abstract too much away from them. It's a balance. There's no single answer.
Azure AKS vs. Azure App Service
Losio: How do you decide which service you should use between Azure AKS and Azure App Service? Which of the two would you choose?
Hanselman: I have a whole talk on this. In the talk I use an analogy that I really like, which is well known; it's called pizza as a service. You're having a party, and you're going to have some pizza. One option is you bring your own fire, your own gas, your own stove, and friends, conversation, and the stack goes all the way up. Or you could go to a place where they have the pizza. They provide the party room. They could even provide actors to pretend to be your friends. Pizza as a service goes all the way out to software as a service. I could write a fake version of Microsoft Word and run it in a virtual machine, or I could pay 5 bucks a month for Office 365, and everything in between.
To the question, how do you decide between App Service and AKS? You have to ask yourself, do I have an app that I want to scale in a traditional web farm way, which has knobs going horizontally? I want to have n number of instances, like in a web farm, and I want to scale up: those two dimensions. Or do I want something that is more partitioned and chopped up, where I've got my shopping cart and my tax microservice and my products, and all of the different things can scale multi-dimensionally? Do I have something that's already architected like that? If you have an app right now, like I did, sitting on a machine under your desk, App Service. If you're doing some greenfield work, or you have something that's maybe a little bit partitioned, and you already have container understanding, put it into a managed Kubernetes service. I think you'll be a lot more successful. It gives you more choice. You can do that in multiple steps.
Reducing Future Maintenance Costs, for a Functions as a Service engineer
Losio: What are the best practices for a Functions as a Service engineer to reduce the future perpetual maintenance costs?
Ferreira: When we are talking about serverless, it's not all or nothing. It's important to find the correct scopes for functions and for the rest. From my experience, there's always a place for them, but it's not always the place that maybe you're thinking about. Not every single type of problem can be solved with Functions as a Service.
Scurtu: I've seen a project where they had huge costs caused by functions, because they basically architected it that way. Each function costs a bit to run. Each function generated an output, and that output was used to trigger a few wheels in the system. At the end of the month, the cost was huge. Most of those things could have been replaced with simpler, smaller components, just to reduce the costs. In the end, if you're not doing flowers and painting things on the walls, you might have very predictable costs even with Azure Functions. It depends what happens after you run the function. That might be the actual issue.
Thwaites: I read something recently where they were talking about how, yes, your Azure Functions infrastructure looks a mess, because it's this one calls this one, and this one calls this one. However, that's a much more honest analysis of your system than doing it in a monolith and saying, there's one big thing going in and one big thing going out. Because actually what's happening is that's the communication between all of the individual functions in your application. People are building these big nanoservice-type infrastructures; not functions but nanoservices, which handle not one domain but one little thing. It's like 10 lines of code. When you then architect that entire thing out as Azure Functions, and you get your architects in and they build a big whiteboard diagram, it's a more honest description of what your system is actually doing. I liked that idea: that is your system. It may look complicated, because it is.
Hanselman: I totally agree with everything that you just said. I have a good example also, and this is the difference between cloud ready and cloud native. When you say cloud native, what does that really mean? People usually think cloud native means Kubernetes, but it really means that the app knows the cloud exists, and it knows all the things available to it. Using Azure Friday as an example: simple application, it's an App Service, it's a container. It's not rocket surgery. However, it has some background services that are doing work, and they're 10 to 50 lines of code. The Azure Friday app knows that the cloud exists. When a file drops into Azure Storage, triggers fire off, 10 lines of code run here, background Azure Functions go and run processing in the background. Then it provides microservices to allow for search and cataloguing of the 700 different shows that we have at azurefriday.com. In the old days, that might have been a background thread in ASP.NET, in a pipeline in App Service. Because I know the cloud's available, and it's pennies to run serverless, I have an application that is both a container app and a cloud native app, in that it knows that a serverless provider exists for it. That's another example where you could move into the cloud naively, then move to App Service. Then, to Martin's point, say, this weird background thing that we used to run that way really belongs over in an Azure Function or a Lambda.
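The background-work pattern Scott describes (a file lands in storage, a few lines of code fire) maps naturally onto a blob-triggered Azure Function. Here is a minimal sketch using the isolated worker model; the container name "uploads" and the class name are illustrative assumptions, not details from the Azure Friday app:

```csharp
using Microsoft.Azure.Functions.Worker;
using Microsoft.Extensions.Logging;

public class ProcessUpload
{
    private readonly ILogger<ProcessUpload> _logger;

    public ProcessUpload(ILogger<ProcessUpload> logger) => _logger = logger;

    // Fires whenever a file lands in the "uploads" blob container;
    // the app never polls, it just knows the cloud exists.
    [Function(nameof(ProcessUpload))]
    public void Run([BlobTrigger("uploads/{name}")] byte[] content, string name)
    {
        _logger.LogInformation("Processing {Name} ({Bytes} bytes)", name, content.Length);
        // ...a handful of lines of background work, e.g. updating a search index
    }
}
```

This is the cloud native distinction in miniature: the trigger, scaling, and retry behavior come from the platform rather than from a hand-rolled background thread.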
When Moving to the Cloud Makes no Sense
Losio: We started this discussion saying we're going to discuss how to move a .NET application to the cloud, the different options, whatever. My 20-plus-year-old application on the server running under my desk, or in my office, or wherever that server is: do I need to move it to the cloud? Do you see a scenario where it makes no sense to move to the cloud? If so, which one?
Ferreira: I can remember some cases where I've seen companies avoiding it, and I completely understand why. Usually, it's because of security or compliance reasons. I have worked for a company where legally they couldn't do it, because according to the jurisdiction of their country that data couldn't move to another place. When you are in the European Union, it's quite simple, but when you start going to other countries, these kinds of problems may happen. In those cases, it doesn't make sense at all. Besides that, I think that if you are not willing to make the investment to take advantage of the things the cloud can give you, maybe you can keep taking care of your servers for a while. Because if you go into lift and shift mode, the cost will be quite high, and you will get to a point where everyone who took the decision to move to the cloud will regret it. If you don't want to modernize the applications, I don't see that going well in the long term.
Scurtu: Take COBOL, for example, the programming language that appeared 64 years ago. For systems that were written in COBOL, it's easier and cheaper to just hire and train new people to continue running them than to move them to the cloud. There will be a time, five years in the future, when these companies will have to be in the cloud, just because maybe their old code becomes deprecated and the security things will not be supported anymore. I'm not seeing those businesses up and running in the future. They will encounter a problem when modernization becomes an issue, maybe not with the cloud per se, but just modernizing them in some way, whatever that means for them.
Is a .NET App in Azure Service Fabric, Running in the Cloud?
Losio: If I had a .NET app running under Azure Service Fabric, should it be considered running in the cloud?
Hanselman: If it's running in Azure, it's in the cloud. If it's a virtual machine, it's in the cloud. If it's not under your desk, or in someone's colocated hosting, it's in the cloud. Yes, absolutely. Is a fabric or a mesh like that a cloud service? Absolutely. If it's calling any Azure APIs directly, that's absolutely considered running in the cloud, 100%.
Thwaites: I'd like to follow up on that and ask, why is there a question? It comes back to what we were saying about the goals of running in the cloud. That's the sort of question that you get an exec asking: are we running in the cloud? It's like, we've got a VM in Azure, we're in the cloud. Great, tick the box on a tender document. Is that the answer that Billy is looking for: can I say to my execs that we're running in the cloud?
Hanselman: I think that is what it is. They want to tell their boss, are we in the cloud? Yes. Again, you could be running a $7 Linux VM in Azure, it’s still running in the cloud.
Disadvantages of Serverless
Losio: Serverless is not all rosy; there are concerns about execution time limits, and run to completion is not guaranteed. When we break things down too much, the architecture becomes too cluttered. I think this doesn't really refer to .NET only; it's a common question about serverless deployment. Would any one of you like to address the disadvantages of serverless approaches?
Thwaites: It's about how you design them. I think the way that you avoid a lot of this is by being conscious about your choices. Why is that a function? Is there a reason why you've made it a function over some of the other choices, some other cloud native choices? Is there a reason why you've decided that you want 10 lines of code in a function here, and 10 of them? I consulted for a bank recently, and they were writing nanoservices: literally every function, every HTTP endpoint that they'd created, was an individual function app. Because, to Scott's point earlier, they wanted to be able to scale them differently, and maybe migrate them to another App Service, because they're all fronted by APIM. Then you look at them and go, why did you make that decision? Because we wanted to be serverless, and everything should be serverless. I think it's that decision making that's the problem. Why should they do that? What is the reason that they're choosing to do serverless? I think it comes down to that same question as the cloud: it's because I want to be able to say that I'm serverless. Serverless to me is about elasticity of scale, essentially infinite scale. I know it's not infinite-infinite; for most intents and purposes, it is. It's also about the scalability of cost, which is why serverless databases like Cosmos or DynamoDB, where you pay by the request, are to me what serverless is about, because if I don't get any hits on my website, I don't pay anything. If I get lots of hits on my website, then I do pay for things. I think this idea that serverless isn't all rosy comes from too many people choosing serverless for a small function that should probably have just been five or six of them dropped into one container app.
Mainframe Apps in the Cloud
Losio: Someone followed up on Irina's comment on COBOL, saying there are some frameworks like COBOL .NET; if Microsoft sees a big business there, they could address them.
Hanselman: There's a whole mainframe migration department. You can basically run mainframe applications in virtual machines in the cloud, which is really interesting. I actually did a couple of episodes of Azure Friday on it, with the idea that you've got mainframes, mid-range, and then Solaris and VAX machines running, emulated or otherwise, in the cloud. That's really big business. It's super interesting.
Factors to Consider when Choosing a Cloud Provider
Losio: How do you choose a provider, apart from cost? Usually, from an engineer's or architect's point of view, I work for a company that has often already made a choice, is already on the cloud, already has a deployment, and I deploy my .NET app wherever that is. Do you see any specific reason, apart from cost, that should be taken into account when you make the decision, if you have the chance to actually make it yourself? I keep hearing people talking about cost, but as someone who usually works with cloud technologies, I often find it hard myself to predict cost. People keep saying you choose one provider or the other according to the cost of a specific serverless or mixed solution, but as Martin mentioned, it's not just a lift and shift. It's not that easy to predict, if you consider this cluster or anything else. How do you make a choice? How do you take that first step?
Hanselman: How do you make the decision about what host to go with?
Losio: Yes.
Hanselman: If your needs are simple, and you need to spin up a container for 8 bucks in the cloud, you can do that anywhere. You can pick whatever host makes you happy. There are lots of people that are ready to spin your $8 container up in the cloud. If you are building an application with some sophistication, and you have requirements, be those requirements data sovereignty (you need a cloud in Germany run by Germans), or HIPAA, American health requirements for how your data is treated, then you're going to want to look at the clouds' certifications. If you are primarily a Linux house, you're going to want to make sure that you've got the tooling that is available, and everything that runs on Linux. If you all run Ubuntu on the desktop, does Visual Studio Code have the extensions that you want for the cloud that you're going to go to? For me, it's not just the runtime aspect of Azure that makes me happy and why I stay on Azure. It's the tooling. It's the Azure command line, and the plugins for both Visual Studio and Visual Studio Code. You need to look at the holistic thing. If you just want to drive, buy a Honda, buy a Toyota, but people who are really thinking about their relationship with that car company are thinking about, where am I going to go for service when the Honda breaks down? How many different Honda dealers are there? Those kinds of things. It's a little more holistic.
Monitoring a .NET App in the Cloud
Losio: I move my .NET app to the cloud. Next step, monitoring. How do I monitor it? What’s the best way to do it?
Thwaites: Monitoring is becoming more of a dev concern. The whole DevOps movement is about bringing people together and allowing devs to care about a lot more of this stuff, like how their apps scale. That's where OpenTelemetry comes in. It all comes down to the whole portability debate. Scott said, if you've got a container, you can run it anywhere. You've also got to think about your monitoring: how exactly you're going to monitor this thing and make sure it's up. Does your cloud provider have built-in monitoring? Are you going to choose an external vendor? That's where OpenTelemetry comes in now, because it's vendor agnostic, and everybody should be doing it. The OpenTelemetry .NET stuff is stable. It is robust, and it allows you to push to anywhere, from Azure Monitor, to X-Ray, to vendors like ours. That, to me, is the future of how we ensure that developers can see that their app is running. Because if you deploy your app to Azure, and nobody can hit it because it's down, was there any point in deploying it to Azure in the first place? You need to know these things.
That, I think, is where, as a .NET community, we've not done as much to enable people to think about this monitoring stuff. They've seen it as a concern they hand over to somebody else, and somebody else will do monitoring and observability for them. I think we're getting to a point now where everybody's starting to care about it. It makes me really happy that everybody's starting to care about this stuff. I want to know: is it running fast? Is it running slow? Is it doing the right things? How did this request go through the application? How does it transition through the 17 Azure Functions that I've written? All of that stuff is really important. I think we're getting to a stage now where people care about it.
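Martin's point about OpenTelemetry being stable in .NET translates to a few lines at startup. A hedged sketch for an ASP.NET Core app follows; the service name "shop-api" and the choice of the OTLP exporter are assumptions, and the calls come from the OpenTelemetry.Extensions.Hosting and instrumentation NuGet packages:

```csharp
using OpenTelemetry.Resources;
using OpenTelemetry.Trace;

var builder = WebApplication.CreateBuilder(args);

// Wire up vendor-agnostic tracing once; swap backends by
// reconfiguring the exporter, not by rewriting the app.
builder.Services.AddOpenTelemetry()
    .ConfigureResource(resource => resource.AddService("shop-api"))
    .WithTracing(tracing => tracing
        .AddAspNetCoreInstrumentation()   // spans for incoming requests
        .AddHttpClientInstrumentation()   // spans for outgoing calls
        .AddOtlpExporter());              // OTLP is accepted by most backends

var app = builder.Build();
app.MapGet("/health", () => "ok");
app.Run();
```

Because the export format is OTLP, the same instrumented app can report to Azure Monitor, to X-Ray via a collector, or to a commercial vendor without code changes.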
Public Cloud Portability
Losio: Could you share thoughts on public cloud portability, with containers and Kubernetes presenting the least common denominator? Basically, is the concern about moving towards managed services somehow offset by the concern around platform lock-in? Gui, do you have any feedback on when to avoid that?
Ferreira: The good thing is that Docker became so commonplace in our industry that even things like App Service will give you a way to run your Docker containers inside it. That's the good news. Besides that, what I always say is that the cloud doesn't remove the job of doing proper work on your code itself. Creating the right abstractions in case you need to move to a different thing, a different SDK, a different something, is always important. If you are concerned about that, for example, in things like functions or serverless, there are ways to abstract yourself from the platform where you are running.
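Ferreira's point about "creating the right abstractions" can be sketched in a few lines. The interface and class names below are hypothetical, not from any SDK; the idea is that application code depends only on the port, while thin provider adapters (say, one wrapping Azure.Storage.Blobs and one wrapping AWSSDK.S3) live at the edge of the codebase:

```csharp
using System.Collections.Generic;
using System.Threading.Tasks;

// Hypothetical port: application code depends only on this interface,
// never directly on a cloud provider's storage SDK.
public interface IObjectStore
{
    Task SaveAsync(string key, byte[] content);
    Task<byte[]?> LoadAsync(string key);
}

// Provider-specific adapters implement the same interface elsewhere.
// For local development and tests, an in-memory adapter is enough:
public sealed class InMemoryObjectStore : IObjectStore
{
    private readonly Dictionary<string, byte[]> _items = new();

    public Task SaveAsync(string key, byte[] content)
    {
        _items[key] = content;
        return Task.CompletedTask;
    }

    public Task<byte[]?> LoadAsync(string key) =>
        Task.FromResult(_items.TryGetValue(key, out var v) ? v : null);
}
```

Swapping providers then means writing one new adapter, not touching every call site, which is the hedge against lock-in Ferreira is describing.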
Scurtu: Actually, the thing with vendor lock-in, I've seen it as a concern from the business people, saying, we have this product but we do not want to be locked into Azure or AWS, we want to be able at some moment in time to just move over. I've seen the concern, but I've never seen it put into practice. Everyone just wants to be prepared for it, but no one ever does it. I think this is somehow a premature optimization: let's do this, because maybe in the future we're going to switch the cloud provider. I've never seen it. Major projects with real vendor lock-in problems, I don't think that's real.
Thwaites: I’ve never heard anybody say that they changed database provider.
Hanselman: It’s very likely not the runtime that is keeping you there, it’s almost always going to be your data. You’re going to end up with a couple of terabytes in Azure Storage and you want to move them to S3, or you’re going to go with Cosmos and you want to move to Atlas or whatever. It’s not going to be your containers, it’s going to be your data.
Thwaites: I think you also miss out. You miss out on a lot of the optimizations that you get by developing specifically for Cosmos. You get to use a lot of things that are very specific to the way that Cosmos works. Its indexing systems are very different from Dynamo's.
Hanselman: People want to put their finger on the chess piece and look around the board and not actually make the move.
Thwaites: You lose scale. It costs you more money, because you end up going lowest common denominator, as the question said. You go with the, what’s the API that both Dynamo and Cosmos support? It’s Mongo. We’ll go with Mongo, then. There’s a lot more that you can do with Cosmos, there’s also a lot more that you can do with Dynamo, and the way that they work, if you really lock yourself in.
Hanselman: Azure Arc is a really cool way that you can have Kubernetes running in multiple places, but then manage it through one pane of glass. I could have Kubernetes and AKS in the cloud, and I could have Kubernetes on my local Raspberry Pi Kubernetes cluster, but it would show up in Azure, which means that I could also have Kubernetes running in Google Cloud, or in AWS, but manage it all in one place, which is cool.
The Journey to .NET Apps in the Cloud
Losio: Thinking from the point of view of an engineer who attended this roundtable: I did it, I enjoyed it. What can I do tomorrow? What's your recommendation for one thing I can do tomorrow to start my journey?
Thwaites: Decide why. Decide why you want to go to the cloud. Set your objectives: cost, efficiency, scale, whatever it is. Set some values that you either want to go up or down. Then, you can make some decisions.
Scurtu: Just make an informed decision, look it up before just deciding on a thing or putting your finger on that.
Ferreira: Get comfortable with a platform. When I say get comfortable, I don't mean go deep on it, but take a first step. There's a different feeling that comes from playing around with things, getting a notion of what they are. Doing a quick tutorial to see how you go from your code to the cloud can spark a lot of ideas.
Hanselman: Get back to basics. Someone asked me what someone would want to learn in 2023. They thought I was going to say, take a class on this cloud or that cloud. I was actually going to say, learn about DNS and HTTP. You would be surprised how many people I've seen with 5, 10 years of experience who can't set up a TXT record or a CNAME in DNS. Those things aren't changing, so learn the basics. The cloud is an implementation detail.
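For the basics Hanselman names, it only takes a few lines to inspect the record types he mentions. The sketch below assumes the third-party DnsClient NuGet package, since the .NET base library only resolves address records; the domain names are just examples:

```csharp
using System;
using System.Threading.Tasks;
using DnsClient; // third-party NuGet package, assumed here

var client = new LookupClient();

// TXT records are commonly used for domain verification and SPF.
var txt = await client.QueryAsync("example.com", QueryType.TXT);
foreach (var record in txt.Answers.TxtRecords())
    Console.WriteLine(string.Join("", record.Text));

// A CNAME aliases one hostname to another.
var cname = await client.QueryAsync("www.example.com", QueryType.CNAME);
foreach (var record in cname.Answers.CnameRecords())
    Console.WriteLine(record.CanonicalName);
```

Running queries like these against your own domain, and then creating a TXT or CNAME record at your DNS provider and watching it appear, is a quick way to build exactly the intuition Hanselman is recommending.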
Thwaites: The problem is always DNS, and the answer is always DNS.