New York is a tech startup hotbed after almost a decade-long run of IPOs – Globe Echo


Olivier Pomel, co-founder and CEO of Datadog, speaks at the company’s Dash conference in San Francisco on Aug. 3, 2023.

Datadog

Albert Wang, a native Californian, moved to New York from Boston with his wife a decade ago and got a job as a product manager at Datadog, which at the time was a fledgling startup helping companies monitor their cloud servers and databases.

New York had its share of startup investors and venture-backed companies, but it wasn’t a hotbed of tech activity. The San Francisco Bay Area was the dominant tech scene. On the East Coast, Boston was better known as the hub of enterprise technology.

But Datadog grew up — fast — going public in 2019, and today it sports a market cap of over $28 billion. After four years at the company, Wang left but chose to stay in New York to launch Bearworks, providing software to sales reps. The city is totally different from the place he encountered when he arrived, and you can feel it when you’re out at a bar or restaurant, Wang said.

“Now it’s extremely diversified — there are more people doing startups,” he said. Before, “you tended to be surrounded by consultants and bankers, but more and more now, there’s tech.”

Datadog’s initial public offering was followed less than two years later by UiPath, which develops software for automating office tasks. They were both preceded by cloud database developer MongoDB in 2017 and e-commerce platform Etsy in 2015.

None of those Big Apple companies are huge by the tech industry’s standards — market caps range from $9 billion to just under $30 billion — but they’ve created an ecosystem that’s spawned many new startups and created enough wealth to turn some early employees into angel investors for the next generation of entrepreneurs.

While the tech industry is still trying to bounce back from a brutal 2022, which was the worst year for the Nasdaq since the 2008 financial crisis, New Yorkers are bullish on the city that never sleeps.

Among the 50 states, New York was second to California last year, with $29.2 billion invested in 2,048 startups, according to the National Venture Capital Association. Massachusetts was third. In 2014, prior to the run of New York City IPOs, California was the leader, followed by Massachusetts and then New York.

Annual capital deployed in New York over the past nine years has increased sevenfold, NVCA data shows. And that’s after last year’s steep industrywide slump. During the record fundraising year of 2021, New York startups received almost $50 billion across 1,935 companies.

California companies raised three times that amount, and the Bay Area has its own share of startup market momentum. Following the November 2022 launch of ChatGPT from San Francisco’s OpenAI, the city has become a mecca for artificial intelligence development.

Investors have pumped over $60 billion into Bay Area startups so far this year, with half of the money flowing to AI companies, according to data from PitchBook.

Northern California has long been the heartbeat of the tech industry, but Murat Bicer remembers what it was like for New York startups before the rush. In 2012, his Boston-based firm, RTP Ventures, presented a term sheet for a funding round to Datadog but wanted one more investor to participate.

“We talked to so many firms,” said Bicer, who left RTP for venture firm CRV in 2015. “So many at the time passed because they didn’t think you could build an enterprise software company in New York. They said it had to be in Boston.”

That dynamic challenged Olivier Pomel, Datadog’s French co-founder and CEO, who had built up a local network after working in New York for a decade. Boston had the enterprise scene. The rest of tech was in Silicon Valley.

“VCs from the West Coast were not really investing outside the West Coast at the time,” Pomel said.

But Pomel was determined to build Datadog in New York. Eventually, Index Ventures, a firm that was founded in Europe, joined in the funding round for Datadog, giving the company the fuel to grow up in the city. Pomel relocated the company to The New York Times building off Manhattan’s Times Square.

For New York to keep the momentum, it will need to churn out a continuing string of successes. That won’t be easy. The IPO market has finally shown some signs of life over the past week after being shuttered for almost two years, but investor enthusiasm has been muted and there aren’t many obvious New York-based tech IPO candidates.

Startups proliferated in New York during the dot-com boom, but many disappeared in the 2000s. Datadog, MongoDB and cloud infrastructure provider DigitalOcean all popped up after the Great Recession. DigitalOcean went public in 2021 and now has a market cap of just over $2 billion.

Employees from those companies and even a few of their founders have formed new startups in New York. Google and Salesforce are among Big Tech employers that bolstered their presence in the city, making it easier for tech startups to find people with the right skills. And investors who for decades had prioritized the Bay Area have recently set up shop in New York.

Andreessen Horowitz, GGV Capital, Index and Lightspeed Venture Partners expanded their presence in the city in 2022. In July of this year, Silicon Valley’s most prized firm, Sequoia Capital, which was MongoDB’s largest venture investor, opened a New York office.

“Today, there’s absolutely no question in my mind that you can build fantastic businesses in New York,” said Bicer.

Eliot Horowitz, who co-founded MongoDB in 2007 and is now building a New York-based robotics software startup called Viam, shared that sentiment.

“The biggest difference between now and then is no one questions New York,” Horowitz said.

Horowitz is among a growing group of successful founders pumping some of their riches back into New York. He backed DeliverZero, a startup that allows people to order food in reusable containers that can be returned. The company is working with around 200 restaurants and some Whole Foods stores in New York, Colorado and California.

Eliot Horowitz, co-founder of Viam and formerly co-founder and chief technology officer of MongoDB, speaks at the Collision conference in Toronto on May 23, 2019.

Vaughn Ridley | Sportsfile | Getty Images

Mitch Wainer, a co-founder of DigitalOcean, invested in collaboration software startup Multiplayer alongside Bowery Capital. He’s also backed Vantage, a cloud cost-monitoring startup founded by ex-DigitalOcean employees Brooke McKim and Ben Schaechter. Vantage, with 30 employees, has hundreds of customers, including Block, Compass and PBS, Schaechter said.

Meanwhile, Wainer has moved to Florida, but he’s building his new company in New York. Along with fellow DigitalOcean co-founder Ben Uretsky, he started Welcome Homes, whose technology lets people design and order new homes online. The company has over $47 million worth of homes under construction, said Wainer, who visits Welcome’s headquarters every month or two.

Wainer said that companies like DigitalOcean, which had over 1,200 employees at the end of last year, have helped people gain skills in cloud software marketing, product management and other key areas in technology.

“The pool of talent has expanded,” he said.

That has simplified startup life for Edward Chiu, co-founder and CEO of Catalyst, whose software is designed to give companies a better read on their customers. When he ran customer success at DigitalOcean, Chiu said finding people with applicable experience wasn’t easy.

“That function, even just a decade ago, just wasn’t relevant in New York City,” Chiu said. “Nowadays, it is very easy to hire in New York City for any role, really.”

Investing in the next generation

The ecosystem is rapidly maturing. When Steph Johnson, a former communications executive at DigitalOcean and MongoDB, got serious about raising money for Multiplayer, which she started with her husband, the couple called Graham Neray.

Neray had been chief of staff to MongoDB CEO Dev Ittycheria and had left the company to start data-security startup Oso in New York. Neray told the Multiplayer founders that he would connect them with 20 investors.

“He did what he said he would do,” Johnson said, referring to Neray. “He helped us so much.” Johnson said she and her husband joked about naming their startup Graham because of how helpful he’d been.

To some degree, Neray was just paying his dues. To help establish Oso, Neray had looked for help from Datadog’s Pomel. He also asked Ittycheria for a connection.

Dev Ittycheria, CEO of MongoDB

Adam Jeffery | CNBC

“I have an incredible amount of respect for Oli and what he achieved,” Neray said, referring to Pomel. “He’s incredibly strong on both the product side and the go-to-market side, which is rare. He’s in New York, and he’s in infrastructure, and I thought that’s a person I want to learn from.”

Pomel ended up investing. So did Sequoia. Now the startup has over 50 clients, including Verizon and Wayfair.

Last year, MongoDB announced a venture fund. Pomel said he and other executives at Datadog have discussed following suit and establishing an investing arm.

“We want the ecosystem in which we hire to flourish, so we invest more around New York and France,” Pomel said.

Ittycheria has had a front-row seat to New York’s startup renaissance. He told CNBC in an email that when he founded server-automation company BladeLogic in 2001, he wanted to start it in New York but had to move it to the Boston area, “because New York lacked access to deep entrepreneurial talent.”

Then came MongoDB. By the time Ittycheria was named CEO of the database company in 2014, New York “was starting to see increasing venture activity, given the access to customers, talent and capital,” Ittycheria said. The company’s IPO three years later was a milestone, he added, because it was the city’s first infrastructure software company to go public.

The IPO, he said, showed the market that people can “build and scale deep tech companies in New York — not just in Silicon Valley.”

WATCH: MongoDB CEO Dev Ittycheria on Q2 results: Very pleased with how company is positioned for the future



Paragon Advisors LLC Sells 13,310 Shares of MongoDB, Inc. (NASDAQ:MDB) – MarketBeat


Roth Financial Partners LLC acquired a new stake in MongoDB, Inc. (NASDAQ:MDB – Free Report) in the second quarter, according to the company in its most recent disclosure with the Securities and Exchange Commission (SEC). The fund acquired 500 shares of the company’s stock, valued at approximately $205,000. MongoDB accounts for 0.1% of Roth Financial Partners LLC’s investment portfolio, making the stock its 22nd largest position.

A number of other hedge funds and other institutional investors have also made changes to their positions in the business. 1832 Asset Management L.P. grew its position in MongoDB by 3,283,771.0% during the fourth quarter. 1832 Asset Management L.P. now owns 1,018,000 shares of the company’s stock valued at $200,383,000 after acquiring an additional 1,017,969 shares during the period. Price T Rowe Associates Inc. MD grew its position in MongoDB by 13.4% during the first quarter. Price T Rowe Associates Inc. MD now owns 7,593,996 shares of the company’s stock valued at $1,770,313,000 after acquiring an additional 897,911 shares during the period. Renaissance Technologies LLC grew its position in MongoDB by 493.2% during the fourth quarter. Renaissance Technologies LLC now owns 918,200 shares of the company’s stock valued at $180,738,000 after acquiring an additional 763,400 shares during the period. Norges Bank purchased a new stake in MongoDB during the fourth quarter valued at about $147,735,000. Finally, Champlain Investment Partners LLC purchased a new stake in shares of MongoDB in the first quarter worth about $89,157,000. Hedge funds and other institutional investors own 88.89% of the company’s stock.

Insider Buying and Selling

In other MongoDB news, CAO Thomas Bull sold 516 shares of the stock in a transaction on Monday, July 3rd. The stock was sold at an average price of $406.78, for a total transaction of $209,898.48. Following the completion of the transaction, the chief accounting officer now directly owns 17,190 shares of the company’s stock, valued at approximately $6,992,548.20. The sale was disclosed in a filing with the SEC, which is accessible through the SEC website. Also, Director Hope F. Cochran sold 2,174 shares of the stock in a transaction on Friday, September 15th. The stock was sold at an average price of $361.31, for a total transaction of $785,487.94. Following the completion of the transaction, the director now directly owns 9,722 shares of the company’s stock, valued at approximately $3,512,655.82. This sale was also disclosed in a filing with the SEC. Insiders sold 104,694 shares of company stock valued at $41,820,161 in the last three months. 4.80% of the stock is currently owned by corporate insiders.

Wall Street Analysts Forecast Growth

MDB has been the topic of several recent analyst reports. Capital One Financial initiated coverage on MongoDB in a research note on Monday, June 26th. They issued an “equal weight” rating and a $396.00 price target on the stock. Oppenheimer upped their price objective on MongoDB from $430.00 to $480.00 and gave the company an “outperform” rating in a research note on Friday, September 1st. 22nd Century Group restated a “maintains” rating on shares of MongoDB in a research note on Monday, June 26th. Sanford C. Bernstein upped their price objective on MongoDB from $424.00 to $471.00 in a research note on Sunday, September 3rd. Finally, Guggenheim upped their price objective on MongoDB from $220.00 to $250.00 and gave the company a “sell” rating in a research note on Friday, September 1st. One analyst has rated the stock with a sell rating, three have given a hold rating and twenty-one have given a buy rating to the company. According to MarketBeat.com, MongoDB presently has an average rating of “Moderate Buy” and a consensus target price of $418.08.


MongoDB Stock Performance

MDB stock traded up $2.82 on Friday, reaching $336.44. The company’s stock had a trading volume of 498,795 shares, compared to its average volume of 1,660,096. MongoDB, Inc. has a 52 week low of $135.15 and a 52 week high of $439.00. The company has a market capitalization of $24.00 billion, a P/E ratio of -96.42 and a beta of 1.11. The firm has a fifty day moving average of $382.03 and a two-hundred day moving average of $319.28. The company has a debt-to-equity ratio of 1.29, a current ratio of 4.48 and a quick ratio of 4.48.

MongoDB Company Profile


MongoDB, Inc. provides a general-purpose database platform worldwide. The company offers MongoDB Atlas, a hosted multi-cloud database-as-a-service solution; MongoDB Enterprise Advanced, a commercial database server for enterprise customers to run in the cloud, on-premise, or in a hybrid environment; and Community Server, a free-to-download version of its database, which includes the functionality that developers need to get started with MongoDB.


Institutional Ownership by Quarter for MongoDB (NASDAQ:MDB)

This instant news alert was generated by narrative science technology and financial data from MarketBeat in order to provide readers with the fastest and most accurate reporting. This story was reviewed by MarketBeat’s editorial team prior to publication. Please send any questions or comments about this story to contact@marketbeat.com.




DVF Software Engineer 0059 – IT-Online – Head Topics


DVF Software Engineer

ESSENTIAL SKILLS REQUIREMENTS – Key Skills (or equivalent):

  • Building CLI tools
  • Building Python libraries
  • Python unit testing
  • Using public cloud services
  • Java exposure
  • RESTful services
  • CI/CD
  • Understanding of Agile ways of working
  • Strong debugging skills

ADVANTAGEOUS SKILLS REQUIREMENTS:

  • MongoDB exposure
  • AWS services (e.g., SNS, SQS, S3, ECS, Lambda, KMS, Secrets Manager, CloudWatch, CDK, […])
  • TypeScript, NodeJS
  • Atlassian APIs

Being able to talk and think at the strategic as well as the technical level – considering different decisions and their long-term impact, then turning to code details. Noticing constraints and opportunities for improvement – and passionately pursuing solutions. Building purpose and ownership – striving for meaning and excellence, and delivering solutions that you are proud of.

Sound understanding of computer science. Great code organisation and quality. Commitment to infrastructure as code.

Read more: ITOnlineSA »




MongoDB (MDB) Unveils Atlas for Manufacturing and Automotive – September 22, 2023


MongoDB (MDB – Free Report) has introduced a fresh initiative called MongoDB Atlas for Manufacturing and Automotive, aimed at helping organizations leverage real-time data for innovation and create applications that take advantage of intelligent, interconnected technology.

The offering encompasses expert-led innovation workshops, customized technology partnerships and specialized knowledge accelerators that provide tailored training pathways for the diverse use cases developers in these sectors encounter.

Organizations can now tap into the potential of MongoDB Atlas for Manufacturing and Automotive to reshape user experiences and revolutionize manufacturing processes. This includes devising novel strategies to tackle industry-specific challenges through innovation workshops and upskilling teams to facilitate the swift creation of modern applications.

Apart from introducing the MongoDB Atlas for Manufacturing and Automotive initiative, MDB has also become a launch partner for the Automotive Competency of Amazon’s (AMZN – Free Report) cloud division, Amazon Web Services (“AWS”).

To achieve this recognition, MongoDB underwent a thorough technical validation process to ensure that MDB and AWS can collaborate effectively to assist automotive companies in working toward a future that is autonomous, customer-focused, safe and sustainable. This development further strengthens the company’s enduring partnership with AWS, which includes making MongoDB Atlas accessible through the AWS Marketplace.

Shares of MDB have gained 69.5% year to date compared with the Zacks Computer and Technology sector’s rise of 34.7% over the same period, aided by long-standing partnerships with giants like AWS.

MongoDB Enters Into a Highly Competitive Automotive Cloud Market

The global automotive cloud market size reached $22.5 billion in 2022. According to an IMARC Group report, the market is expected to reach $60.2 billion by 2028, witnessing a CAGR of 17.30% over the 2023-2028 period. Since the introduction of connected vehicles, the rapid evolution of automotive cloud computing, autonomous driving capabilities and intelligent automotive features has been quite remarkable. Car manufacturers worldwide are increasingly dedicating resources to integrating cloud technology to enhance the driving experience.

The incorporation of cloud technology in automobiles holds the potential to save lives by preventing accidents and enables vehicles to communicate effectively with one another. It’s fair to assert that cloud computing, particularly distributed cloud systems, is becoming an essential technology for contemporary and intelligent automobiles.

This Zacks Rank #3 (Hold) company faces tough competition from cloud providers like Salesforce (CRM – Free Report), Oracle (ORCL – Free Report) and AWS. You can see the complete list of today’s Zacks #1 Rank (Strong Buy) stocks here.

Salesforce’s automotive CRM is designed to enable active listening to customers and engagement across diverse touchpoints, encompassing online platforms, mobile devices, social media, showroom interactions and connected vehicle interfaces. It also helps in establishing a lasting relationship with customers.

Oracle’s top-tier cloud solutions provide a platform for handling the specific and distinctive core business needs of the automotive industry. Leveraging advanced digital and analytical technologies, its solutions are designed to enhance operational efficiency across various domains, such as supply chain, enterprise resource planning, engineering, product lifecycle management, production, asset management, customer experience and service management.

AWS offers a comprehensive range of specialized services and solutions tailored for the automotive industry. These encompass software-defined vehicle technology, connected mobility solutions, autonomous mobility systems, digital customer engagement tools, manufacturing support, supply chain management and product engineering solutions for companies of all scales from emerging startups to established global original equipment manufacturers.

MongoDB has already acquired customers like HiveMQ, Share Now and Digitread in the automotive and manufacturing sector. The company’s positive impact in this sector is expected to boost the top line as well as total customers.

The Zacks Consensus Estimate for MDB’s fiscal 2024 revenues is pegged at $1.61 billion, indicating year-over-year growth of 48.34%. The Zacks Consensus Estimate for total customers is pegged at 47,871, indicating a year-over-year increase of 17.3%.





Presentation: From Cloud-Hosted to Cloud-Native

Rosemary Wang

Article originally posted on InfoQ. Visit InfoQ

Transcript

Wang: We’re going to talk about going from cloud hosted to cloud native. It all starts when you say, I want to put an application on the cloud. It seems really simple. Step one, build the application. Step two, figure out which cloud you want to put it on. Step three, run in production. After this process, you say to yourself, that’s great. I’ve now built a cloud native application. Let’s look at the definition. Cloud native is building and running scalable applications in modern dynamic environments, such as public, private, and hybrid clouds. This is a really great definition from the Cloud Native Computing Foundation. When you think about the scenario that I just outlined, let’s answer the question, is it scalable? Are we really in a modern dynamic environment? The answer is, kind of. The reality is that when you put an application on the cloud, there are a lot of obstacles that come up in the process, and the first being, what operating system should I run it on? Should I even run it on the operating system in the first place? Next, you think about how you should package it. Should it be a function? Should it be a container? What should it be? Next, you think about its configuration? How do I configure it to run on certain infrastructure? How do I make sure it’s routed correctly? The next thing that we think about is, does it even make any sense to secure it? Is it something that is running on a private environment? Is it something that’s running potentially publicly? Are there database passwords and credentials that we should be aware of while it runs?

Then, we come to the CI framework. We need to deploy it to the cloud somehow, and deploying to the cloud is complicated, especially when you have network routing considerations in place, and you have to think about it really carefully. Next, we say, ok, it’s on cloud. We’ve done this process. We’ve built our CI framework so that it can deploy to cloud. We’ve taken all of these steps. We’ve thought carefully about these requirements. It must be cloud native. Then you come back to your cloud bill, and your cloud bill shows you that it’s pretty expensive to run this application. Then you go back to the drawing board. You rearchitect the application, thinking to yourself, maybe now we consider it cloud native, because, after all, it’s taking advantage of all of these service offerings. We’ve done all of our research. We’ve done the engineering work to make sure that we’ve optimized it. The reality is, it’s probably not cloud native. The application that you think about putting on cloud isn’t going to be cloud native. Instead, it’s cloud hosted. You’ve built and run an application in an environment like a public, private, or hybrid cloud. It’s not really scalable. It’s perhaps not really the most modern or dynamic application in the first place.

Here, I am going to answer this question: what does it mean to go from cloud hosted to cloud native? Over the years, I’ve realized that cloud native architecture is very complicated. It has a changing definition. While I think that the CNCF’s definition is really useful, and it’s actually probably more thorough, it’s also not as nuanced as the actual implementation. We’re going to talk about the practices and patterns that let you identify certain architectures as cloud native, without pointing to a specific technology and saying, that technology is going to give me a cloud native architecture. You can boil these practices down to some foundational pieces, and use those to build up a cloud native architecture. I have a couple of cloud native considerations that I think about. These are architectural capabilities that I consider important for a cloud native architecture. First is adaptability. Second is observability. Third is immutability. Fourth is elasticity. Fifth is changeability. We’re going to go through each of these, and I’ll give an example of how they are important to a cloud native architecture.

Adaptability – Adjust Between Environments

The first is adaptability. Adaptability is the ability to adjust between environments. This isn’t just environments as in development, staging, and production. This is also environments in the sense of a public cloud, private cloud, hybrid environment, or multi-cloud environment. The idea is that you need to be able to adapt your dependencies between different kinds of environments. Let’s take this example. Imagine that I have a data center and I run Kubernetes in that data center; it could be on OpenShift or something else. I also have a Kubernetes cluster that I run on a managed service, managed by a public cloud. They’re both Kubernetes, which makes it pretty easy for me. I could take one application, run it in my data center, and as long as it’s on Kubernetes, I can just bring it to the public cloud. Both are equivalent, in theory. This principle is adapting by abstraction. The idea is that a cloud native architecture often relies on abstractions to improve adaptability. If you need to adjust or change applications between environments, you’re going to use an abstraction to do that.

There is a bit of a caveat to this. Just because you have Kubernetes in your data center and Kubernetes in a public cloud does not mean that it is an easy path to adapt between environments. This is where I think there are some foundational practices that you need in place in order for the adaptability to exist, and thus for you to have a cloud native architecture. The first is that when you move an application, or some application manifest, from one Kubernetes to another, you have to be concerned about versions. Not all Kubernetes resources are available in every version. Second is images. If you’ve ever worked across multiple clouds with Kubernetes clusters, managed Kubernetes offerings often have different kinds of container registries or image pull options compared to one another.

The other important obstacle in the path of moving from one Kubernetes cluster to another often involves customization. Many times, when you’ve worked in a data center environment and you have your own Kubernetes, you have customized certain workflows so that they are aligned with your organization’s workflows. This could be custom resource definitions or other resources. They don’t map perfectly to a public cloud, so then you have to adapt them as well. Persistence becomes an obstacle. If you expect certain persistent volumes or certain resources to exist in the data center Kubernetes, but they don’t exist in the public cloud Kubernetes, then you have to readjust your application to work with a different persistence layer. There are abstractions in Kubernetes that do help with this. For example, a Kubernetes persistent volume claim will help you attach to a specific persistent volume type that differs across clouds. You have to make the effort to use that abstraction as well as build the underlying, different persistent volumes. There are significant differences, and it’s not quite that easy moving from one Kubernetes cluster to another.
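
As a rough sketch of the persistent volume claim abstraction described above, the snippet below creates a PVC with the official Kubernetes Python client. The claim name and storage class are illustrative assumptions; each managed Kubernetes offering ships its own default storage class names.

```python
from kubernetes import client, config

# Load credentials from the local kubeconfig; the same code works whether
# the context points at a data center cluster or a managed cloud cluster.
config.load_kube_config()

# "standard" is an illustrative storage class name, not a guaranteed default.
pvc_manifest = {
    "apiVersion": "v1",
    "kind": "PersistentVolumeClaim",
    "metadata": {"name": "app-data"},
    "spec": {
        "accessModes": ["ReadWriteOnce"],
        "storageClassName": "standard",
        "resources": {"requests": {"storage": "10Gi"}},
    },
}

# The application only ever references the claim "app-data"; the volume
# type behind the claim can differ in every environment.
client.CoreV1Api().create_namespaced_persistent_volume_claim(
    namespace="default", body=pvc_manifest
)
```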

This is an example, but there are many other scenarios in which this problem exists. What are some foundational practices to keep in mind if you’re designing a system, you want to use an abstraction, and you need a way to adjust between environments? My big tip is to try for just enough abstraction. Kubernetes is one example of a just enough abstraction. There could be other open source standards that allow you to do just enough abstraction and take away some of the pain of needing to adjust between environments. Just enough abstractions tend to exist in open source standards, but they may also exist in your organization’s implementation of abstraction. If you say, for example, I’ll build one API layer to make sure that I query information for security specs or security metadata, that is just enough abstraction to make sure that you’re not just querying a specific tool.

Here are some foundational practices that help you achieve a cloud native architecture from an adaptability perspective. First, decouple configuration and secrets. This is something that you might encounter as part of the twelve-factor app perspective for microservices. Even if you’re not necessarily doing those kinds of application architectures, you need to consider decoupling configuration and secrets in a cloud native architecture. The reason why is that as you adapt across environments, you’re going to have different configurations, whether it be development, staging, or production, or differing kinds of configurations across clouds, and you need to be able to decouple that configuration as well as the credentials you use to access all of those target clouds. Decoupling them will allow you to scale your application, but it will also minimize the effort you need to adapt it to specific clouds. Decoupling the configuration and secrets away from the application, or away from the infrastructure, is one way that you can ensure that you have some consistency in how you are going to adjust a dependency across clouds.
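
A minimal sketch of this decoupling, assuming configuration and credentials arrive through environment variables; the variable names here are invented for illustration, and the password would ideally be projected from a secrets manager rather than set by hand:

```python
import os
from dataclasses import dataclass

@dataclass(frozen=True)
class AppConfig:
    """Configuration injected from the environment rather than hardcoded."""
    db_host: str
    db_user: str
    db_password: str
    region: str

def load_config() -> AppConfig:
    # Each environment (dev, staging, production, or a different cloud)
    # supplies its own values; the application code stays identical.
    return AppConfig(
        db_host=os.environ["DB_HOST"],
        db_user=os.environ["DB_USER"],
        db_password=os.environ["DB_PASSWORD"],
        region=os.environ.get("CLOUD_REGION", "us-east-1"),
    )
```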

The next is to use dependency injection. The important thing about dependency injection is to apply the abstraction in a way that lets you change the downstream resource as well as the upstream resource. For example, if you have an application that runs on a server, and that application might need the server IP address for some reason, you don’t want to have the application query the server command line, or query the network interface, just for the IP address. The reason is that querying the underlying interface differs: it won’t work the same way in a container, it will be different on a virtual machine, and it might be different on something else. You want a layer of abstraction there so that the application can query the server IP without depending on the server’s underlying operating system. The way to think about this is to instead use a metadata service for that machine. You call the endpoint for that machine and you retrieve the IP address from an API endpoint. That’s an example of using dependency injection to decouple the dependencies. The reason why this is particularly important, especially from an infrastructure standpoint in cloud native, is that you’re often going to change the upstream resources, basically the ones that depend on the underlying infrastructure. The upstream resources are going to change much faster than the lower-level resources. It’s not going to be easy to adapt a network across multiple clouds. You can’t use the same network declaration from Azure to AWS to Google Cloud, but what you can do is, generally speaking, describe the server in a similar manner across Azure, Google Cloud, and AWS. The idea is that with dependency injection, you can make changes to those upstream resources and adapt them across clouds without putting additional effort into the lower-level resources.
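
Here is a minimal sketch of that injection pattern. The AWS instance metadata endpoint (IMDSv2) shown is a real service, but using it as the sole resolver is an assumption for illustration; other clouds expose their own metadata endpoints, and the resolver can be swapped per environment without touching the application.

```python
from typing import Callable
import urllib.request

def aws_metadata_ip() -> str:
    """Resolve the local IP via the AWS instance metadata service (IMDSv2)."""
    token_req = urllib.request.Request(
        "http://169.254.169.254/latest/api/token",
        method="PUT",
        headers={"X-aws-ec2-metadata-token-ttl-seconds": "300"},
    )
    token = urllib.request.urlopen(token_req, timeout=2).read().decode()
    ip_req = urllib.request.Request(
        "http://169.254.169.254/latest/meta-data/local-ipv4",
        headers={"X-aws-ec2-metadata-token": token},
    )
    return urllib.request.urlopen(ip_req, timeout=2).read().decode()

class App:
    # The resolver is injected, so the application never queries the
    # operating system or a cloud-specific interface directly.
    def __init__(self, ip_resolver: Callable[[], str]) -> None:
        self._ip_resolver = ip_resolver

    def advertise_address(self) -> str:
        return self._ip_resolver()

# Swap the resolver per environment without changing App itself.
app = App(ip_resolver=aws_metadata_ip)
```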

The next thing that, foundationally, you need in place in order to get closer to cloud native from an adaptability standpoint, is to write abstract tests. We don’t like writing tests; it’s hard to justify the effort. But when you have to work across multiple clouds, and you have a cloud native architecture, meaning one that’s fairly dynamic, the thing that will help you is knowing that functionally everything is working as expected. What I usually do is write an abstract test to test the functionality of my application when it’s on a cloud. I call these end-to-end tests. Why are end-to-end tests an important place to abstract? They’re pretty much going to be the same across any cloud. If I know my application is going to submit a payment, it doesn’t matter which cloud it’s running on, it should just submit the payment. In that case, the test itself should have a level of abstraction, meaning it should test the endpoint, and the endpoint should return the correct information. It should not matter what the underlying cloud is, and it shouldn’t matter what the underlying technologies are. Investing in writing abstract tests is very useful.
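
One way such an abstract end-to-end test might look, assuming pytest and the requests library; the endpoint and payload are hypothetical, and the only environment-specific input is the base URL:

```python
import os
import requests

# Where the deployed service lives; the test itself is cloud-agnostic.
BASE_URL = os.environ.get("APP_BASE_URL", "http://localhost:8080")

def test_submit_payment_succeeds():
    """Functional check: a payment submits correctly on any cloud."""
    response = requests.post(
        f"{BASE_URL}/payments",
        json={"amount": 1000, "currency": "USD"},
        timeout=10,
    )
    assert response.status_code == 201
    assert response.json()["status"] == "accepted"
```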

Finally, this is probably one of the more disruptive practices that I’m going to include on the list. If you do not have this in place, it’s going to be incredibly difficult to improve the adaptability of your system. That is to update to the stable or stable minus one version. This is because most clouds that offer a managed service tend to offer different versions of it. One cloud may offer up to Kubernetes 1.23; another one might only offer 1.17 as stable, for some reason or another. If you don’t run stable or stable minus one versions, it becomes difficult to adjust across clouds, or even across environments. You can imagine that sometimes dev might be at 1.21, but then production might be at 1.17. The reality is that when you have all of these different versions, it makes it incredibly difficult to adapt upstream dependencies across all of these different environments. Updating to stable or stable minus one is a great way to ensure that an upstream dependency can move across all of these different environments comfortably.

If all else fails, and you have these practices in place but you’re finding that it’s still really difficult to adapt an application across all these different platforms and clouds, then as a last resort, refactor the abstraction. What I mean is that the abstraction may not be working for you. For example, if you’re finding that it’s still incredibly difficult to port an application from one Kubernetes to another, that’s usually an indication that it’s not quite the right abstraction. Maybe the application does not lend itself well to Kubernetes, or maybe it’s just not made to run in a container. In that case, figure out what the right abstraction is and identify how best to improve that application to suit it, lowering the effort it takes to adapt across environments. This is the last resort. Oftentimes this is the way we jump toward cloud native: we’ll start with these foundational practices, and then, eventually, we’ll resort to the last one, which tends to be a larger refactor effort.

Observability – Navigate Cloud Cover

Next, we have observability. Observability is the way that you can understand how you’re using your cloud as well as how your applications run on it. This is incredibly important. When we talk about being cloud hosted, we have an understanding of usage as well as performance, but we don’t have a really deep understanding of how everything interacts as a larger system. As part of a cloud native architecture, you need to understand how everything interacts together. For example, imagine that I have a monitoring system in my data center. That monitoring system is now responsible for not only retrieving the information from the data center, in the more traditional monitoring server approach; it also has to retrieve information across various syslogs on different machines. It has to get AWS access logs, Google Cloud access logs, Azure access logs, Azure Active Directory access logs, and Kubernetes logging and metrics. It needs to aggregate some of the logs from Spring Cloud, and some of the logs from .NET applications. Then there are any services that we run on top of the infrastructure; this could be a service mesh, this could be a secrets manager. All of this information now needs to get aggregated somewhere.

The best approach is to set some standards. Notice I don’t say set one standard; set some standards. With this heterogeneous set of workloads, services, and platforms, it’s really difficult to have one standard. You’ll spend way too much time trying to organize everything into one uniform data format with the correct fields and the correct values. While there is some value to that, the effort you spend doing it does not necessarily give you a matching benefit. What is the alternative? When you set some standards, identify standards that you can adopt from an organizational standpoint and that fit just enough of your workload footprint. For example, Prometheus gives you an open source standard for metric formatting; a lot of metric servers will pull from different endpoints using the Prometheus metrics format. You can also look at OpenTelemetry to add instrumentation to the application itself. It works across a variety of programming languages and frameworks. There’s also Fluentd. Fluentd will help you extract logs from a machine and then send them to a target in a more structured format. There are small ways that you can do this, and a number of them are open source standards. Again, the thought is that abstraction will help you adapt.
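
As a small illustration of adopting one of these standards, the sketch below exposes metrics in the Prometheus exposition format using the prometheus_client library; the metric names are made up for the example, and any Prometheus-compatible server can scrape the endpoint.

```python
import random
import time

from prometheus_client import Counter, Histogram, start_http_server

# Illustrative metric names following Prometheus naming conventions.
REQUESTS = Counter("app_requests_total", "Total requests handled")
LATENCY = Histogram("app_request_seconds", "Request latency in seconds")

def handle_request() -> None:
    REQUESTS.inc()
    with LATENCY.time():
        time.sleep(random.uniform(0.01, 0.1))  # stand-in for real work

if __name__ == "__main__":
    # Metrics become scrapeable at http://localhost:8000/metrics
    start_http_server(8000)
    while True:
        handle_request()
```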

However, there is a point to meeting resources where they are. Sometimes it’s just not possible to standardize: you can’t add the OpenTelemetry library in, or instrumentation is just too difficult, or you already have metrics set up for an application and you really don’t want to refactor and add a new library when you don’t have to. So there is a point to meeting resources where they are, even while setting some standards. If there are resources that absolutely cannot be refactored, or the level of effort is just too high and there’s no real value in it, meet them where they are right now and just take the information. When we talk about taking that information in, there are some foundational practices. The first is tagging and adding metadata. If you are using an existing metrics library in an application, make sure you have consistent, standardized metadata. This metadata should be fairly uniform across infrastructure, applications, and other managed service offerings. Architecting the proper metadata for identification will help you identify resources and get a better end-to-end picture of what’s going on in your environment. It’s very difficult to justify tagging and adding metadata after the fact, but it is worth the effort, especially from a billing and compliance standpoint.
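
A minimal sketch of standardized metadata, with invented field names for illustration: one helper produces the same tags for every resource, log line, and metric, so telemetry can be correlated end to end.

```python
from typing import Dict

def standard_tags(service: str, environment: str,
                  team: str, cost_center: str) -> Dict[str, str]:
    """Single source of truth for metadata attached to infrastructure,
    applications, and telemetry alike."""
    return {
        "service": service,
        "environment": environment,  # e.g., dev, staging, production
        "team": team,                # useful for routing alerts
        "cost_center": cost_center,  # useful for billing breakdowns
    }

# The same tags annotate a log record here and, elsewhere,
# provisioned cloud resources.
tags = standard_tags("payments-api", "production", "payments", "cc-1234")
print({"message": "payment accepted", **tags})
```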

Enable audit and access logs. This seems pretty intuitive, but most people don’t do it until after everything has happened. They decide, ok, we’ve built and engineered this, now we can enable audit and access logs. The reality is, audit and access logs are pretty powerful. They not only give you a security view into who’s accessing what in your environment; they’re also a very useful way to track transactions and interactions. This is important in a very dynamic environment. When we talk about cloud native, it’s often in a very dynamic, ever-changing environment. A container is coming up, a container is coming down. It becomes really difficult from a development perspective to understand what is happening in that system when a container is only available for perhaps 10 or 15 minutes, and then suddenly it’s been destroyed for some reason. Enabling audit and access logs is incredibly important.

On top of that, in a cloud native architecture, we tend to skew toward continuously delivering the resources. The reason why is, again, we want to take advantage of the dynamic environment. When we think about continuous delivery, it becomes really easy to say, I’m going to just use a CI framework and let it have access to deploy anything it needs to. That is access, too. In order for you to understand the access that your CI framework has, and to properly audit its automation, you do need to have logging available for it. You’ll need to do a lot of automation from a cloud native standpoint, and it’s critical to understand that any automation you do needs to have access to something. Even if you’re really great at least privilege, meaning minimizing the amount of access that an automation piece has, you still need a way to log and audit it. It’s not just a security concern: if you’re a developer or an engineer working on the system, you need those logs to reconstruct, oftentimes, the interactions within it. Enable those audit and access logs.

The other thing that you’ll need to do is aggregate telemetry. This is a little bit more difficult. Sometimes, it’s not so easy to aggregate telemetry without finding a new tool or technology and installing it in the process. There are ways that you can aggregate the telemetry into a couple different targets and make sure that that information exists. Making sure you aggregate the telemetry will help add, again, a level of abstraction for you to adapt across different environments. If you can aggregate telemetry across other clouds, that allows you the ability to understand how applications are interacting across different clouds versus within a cloud. Standardizing and indexing telemetry comes after you’ve tagged and added the metadata. Standardizing and indexing telemetry does allow you to trace the transactions within an environment. It traces transactions as well as interactions from the application level to infrastructure level. Having some telemetry that you can search on and specific fields that you know will exist will help you identify later on what resources are important, and what resources are not.

Finally, if you’ve done all of these foundational practices, and you find yourself struggling to make changes to your application, it’s still probably mostly a cloud hosted application, it’s not really cloud native. In that case, maybe assess a push versus pull model. More of the traditional monitoring systems use a push model, meaning there’s an agent and it collects the information and pushes it out to the metric server. Or you bundle an agent with the application and it pushes those metrics or telemetry out to a server somewhere. In more recent years, what we consider more of the distributed approach is to pull so you have an agent that’s sitting either in the environment, or on the host level, and it pulls from multiple endpoints, and then sends it out to the server. Assessing a pull-based approach is one way that you can look at getting closer to cloud native, and this will help you scale in the future. It doesn’t mean you have to redo your entire systems. It might just mean that changing one or two configurations on the monitoring agent side to say, we only really need you to exist on one host, and you can scrape all the hosts in this region, for example. Assessing the push or pull model will help scale specifically the observability piece of this. That way you don’t have to rearchitect or rebuild the entire monitoring system just so that you have more visibility across your cloud environment. As a last resort, then rebuild your monitoring system. Sometimes this is not something you can avoid. If you have a lot of older systems in place and you have a lot of bespoke monitoring, sometimes it’s better just to rebuild the monitoring system and standardize with these practices in mind.

Immutability – Keeps Up with Dynamic Environments

Immutability is the next capability that I think about when it comes to cloud native architecture. Immutability helps you keep up with dynamic environments. To describe immutability, I’m just going to go through this example. Imagine that you want to update Java for an application. One way that you could do it is to log into the server that houses the application and update the Java package there. You run into the danger of the application itself breaking. Maybe other dependencies on the machine rely on Java, and now you’ve broken all of them. Now you have no server running and no application running, which affects your systems as well as your customers. In recent years, we’ve moved more toward the immutability approach, where we deploy a new application binary on a new underlying system with the updated Java. Rather than log in and update Java, we take a new application, as well as a new instance of the application’s environment, and deploy it out with the updated Java. If it does not work, you can always revert to the old version. If it does work, you can simply delete the old version. This helps with the overall reliability of the system.

Immutability helps you roll out new resources for most changes. There’s a caveat to this. Some changes are related to configuration or secrets. Imagine you need to change the database secret, and you don’t really want to do it by redeploying the application. You don’t want to deploy a new application just for the new database secret, because maybe it’s been compromised, and now you’re just adding more fuel to the fire. What you’ll do instead is signal the application: please reload, because there’s been some configuration change or some password change. Then it will actually reload. This is not immutable; this is mutable. Some changes are mutable, and you have to keep that in mind. Some changes can be mutable precisely because you’ve added a level of abstraction: because you’ve shifted configuration and secrets into a separate management store, the change can be handled mutably on the application side.

From a foundational practice standpoint, it’s important to do all of the things you can to be immutable, but know that some things are mutable. Let’s think about that. The first is automating infrastructure as code. Infrastructure as code tends to assume that the infrastructure is immutable. There are very few places in which you’re updating infrastructure in place when you have infrastructure as code. If you have that automation in place, you get the principle of immutability out of the box, which is nice. Decoupling state and configuration will help you separate the pieces that require mutable changes from the ones that can be handled immutably. What I mean is that if you have data that your application is writing to, or data that your application needs, decouple that from the application that is running. Decoupling state and configuration becomes an important part of cloud native, mostly because your applications will have to be able to adapt and run anywhere, but your data may not. In which case, you might find yourself needing to further decouple the data from the application itself. Decoupling state and configuration is an important foundational step.

The next is reloading and restarting for changes. Not all changes are done immutably; some of them are mutable, and I just covered a few. Reloading or restarting the application is important: it lets you make a mutable change without necessarily changing the application itself. One good example is that if you change a database password and you’re using Spring Boot, you could use the Actuator API endpoint and basically tell the application, reload everything, reload the database connection string. What this will do is gracefully shut down the existing database connections before retrieving the new database password and reconnecting to the database.
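
As a sketch of what triggering that reload might look like from a deployment script, the snippet below POSTs to a Spring Boot management endpoint. It assumes Spring Cloud’s refresh endpoint is enabled and exposed (an opt-in configuration), and the hostname is a placeholder; the exact endpoint and response shape depend on your Spring setup.

    import requests

    # Hypothetical internal hostname for the Spring Boot application.
    APP = "http://myapp.internal:8080"

    # Spring Cloud's /actuator/refresh (when enabled) tells the running app to
    # re-read its externalized configuration, e.g. a rotated database password,
    # and rebind the affected beans, without redeploying the binary.
    resp = requests.post(f"{APP}/actuator/refresh", timeout=10)
    resp.raise_for_status()

    # The endpoint responds with the configuration keys that changed.
    print("refreshed keys:", resp.json())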

Finally, optimize provisioning. I can’t emphasize this enough: immutability only works if you can create resources quickly. Sometimes you will have to wait. For example, if you’re provisioning large clusters, it doesn’t always make sense to replace them immutably. You want to be sure you can get those resources quickly, from a functional standpoint, and you don’t want to hurt the system by spending a long time waiting for new resources to come up. Make sure you’re optimizing provisioning. It’s important, at least from an architectural standpoint, to ensure that anything you provision, you can provision repeatedly and fairly quickly. Because if something is broken, you’ll want to take advantage of a new environment; you’ll want to use immutability to create new environments and restore the system. Optimizing provisioning becomes an important practice for that.

Finally, distributing data. Distributing data is a little complicated, but at some point you’ll realize that when you have data in a cloud native environment, you need to figure out what to do with it. That’s when people start to move toward different kinds of datastores: they move away from single databases toward some kind of distributed datastore or distributed database. This helps you treat the data infrastructure immutably without treating the content of the data immutably. You still preserve the data itself, but the nodes that handle and distribute the data can be replaced immutably across your cloud. As the last resort, you refactor for immutability. This is when you just don’t have any other options. If you don’t have infrastructure as code and you’ve treated your infrastructure mutably until now, you have existing infrastructure and you need to manage it better. In this case, you may have to undergo a significant refactor so that you end up with new resources that are managed by your infrastructure as code deployment.

Elasticity – Make the Most of Resources

The fourth capability, which is a little more complicated to talk about, is elasticity. Elasticity is the ability to make the most of your cloud resources, and I think it’s the hallmark of being cloud native. Is your application elastic? Most of the time when we talk about moving an application to the cloud, we think it’s straightforward: OK, I run it in a data center, it’s been updated to all its latest versions, now I’m just going to pick it up and run it in a virtual machine in the cloud. That works, except then you realize it’s quite expensive to run that application on a virtual machine, because you’ve spec’d out the virtual machine to be the same size as the one in your data center. That doesn’t necessarily reduce your cost. Elasticity is actually about the cost of time. In the cloud, you’re getting charged per hour, or per unit, or per run, or for how long that run takes. Elasticity is about taking advantage of the time that you have for that resource. It is all about time.
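
A toy calculation makes the cost-of-time point concrete. The hourly rates below are made-up placeholders, not real cloud prices; the only claim is the shape of the comparison, a like-for-like VM running 24/7 versus capacity that scales with demand.

    HOURS_PER_MONTH = 730

    # Lift and shift: one large VM sized like the data-center box, running 24/7.
    big_vm_hourly = 0.40  # illustrative rate
    lift_and_shift = big_vm_hourly * HOURS_PER_MONTH

    # Elastic: four small instances during ten busy hours a day, one otherwise.
    small_vm_hourly = 0.10  # illustrative rate
    busy_hours = 10 * 30
    quiet_hours = HOURS_PER_MONTH - busy_hours
    elastic = small_vm_hourly * (4 * busy_hours + 1 * quiet_hours)

    print(f"lift and shift: ${lift_and_shift:.2f}/month")  # $292.00
    print(f"elastic:        ${elastic:.2f}/month")         # $163.00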

What do we mean by optimizing the cost of time? We traditionally thought about optimizing cost, and taking advantage of elasticity, as the difference between vertical and horizontal scaling. Vertical scaling is a focus on resource efficiency: if you have an application, you give it x number of CPUs and x amount of memory. It works, but we found out quickly that most of the time we weren’t taking advantage of all the CPU or memory. So we thought about going to horizontal scaling. The idea is that we increase workload density: we have smaller instances scheduled on a lot of different machines, and these smaller instances can do parallel processing, so they can better serve requests. It’s not necessarily an either/or, but in most cloud native architecture approaches, the general assumption is horizontal scaling with increased workload density. Cloud native doesn’t always mean horizontal scaling, though. It’s actually pretty complicated, because not everything can be horizontally scaled, and you’re not going to get the benefit of elasticity just from horizontal scaling.

What do we mean by this? There are a couple of important practices to keep in mind. The first is to evaluate idle versus active resources. It’s not really about horizontal versus vertical scaling, many small instances versus fewer large ones; the question is what is idle versus what is active. The reason horizontal scaling is appealing from a cloud native architecture standpoint is that it takes advantage of active resources: it uses as many active resources as possible while minimizing idle ones. But there are situations in which you cannot maximize all of those active resources. For data processing, for example, horizontal scaling might not make any sense; it might make more sense to run one process, spin up a VM for an hour, and then shut it down. That’s it. So evaluating idle versus active resources becomes important. For a simpler win from an elasticity standpoint, look at your cloud environment and understand which resources are truly being used. If things are not being used, you can shut them down. A good example: if you have a development environment and you’re only using it on weekdays, maybe shut it down over the weekend. That will save you some money. Evaluating idle versus active resources actually becomes more important than immediately refactoring your applications for horizontal scaling.
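
The weekend shutdown is easy to automate. Below is a sketch using boto3 (the AWS SDK for Python), meant to run from a Friday-evening scheduler; the environment=dev tag and the region are assumptions, and the same idea applies to any cloud’s API.

    import boto3  # AWS SDK for Python; assumes credentials are configured

    ec2 = boto3.client("ec2", region_name="us-east-1")  # region is an assumption

    # Find running instances tagged as development environments.
    reservations = ec2.describe_instances(
        Filters=[
            {"Name": "tag:environment", "Values": ["dev"]},  # hypothetical tag
            {"Name": "instance-state-name", "Values": ["running"]},
        ]
    )["Reservations"]

    instance_ids = [
        instance["InstanceId"]
        for reservation in reservations
        for instance in reservation["Instances"]
    ]

    if instance_ids:
        # Stopped instances accrue no compute charges over the weekend.
        ec2.stop_instances(InstanceIds=instance_ids)
        print("stopped:", instance_ids)
    else:
        print("no idle dev instances found")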

The next is optimizing warm-up and runtime. This applies when you have jobs that get processed, or resources that get scaled up and down, such as virtual machine instances. All of the public clouds now have autoscaling capability, so if you scale up, you can scale down. Optimizing warm-up becomes incredibly important because you are, again, charged per hour. If it takes a very long time to warm up those resources, you’re paying for capacity that isn’t doing useful work yet; those resources are effectively idle because they’re not ready for use. Optimizing the warm-up and runtime becomes important, and you can use immutability to help with that. For example, if you currently use user data to configure your virtual machine, it might optimistically take 15 minutes, much of it spent installing packages. Instead of spending those 15 minutes on every boot, build an immutable virtual machine image with all of those packages already in place, so that at launch all you do is add configuration.
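
As a sketch of the baked-image approach, the boto3 call below launches an instance from a pre-built image, so user data shrinks to a small configuration step. The AMI ID, instance type, and config contents are placeholders; building the image itself would be done ahead of time with a tool such as Packer.

    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")

    ec2.run_instances(
        # Hypothetical pre-baked image: packages were installed at image build
        # time, so boot no longer includes a long install script.
        ImageId="ami-0123456789abcdef0",
        InstanceType="t3.small",
        MinCount=1,
        MaxCount=1,
        # User data now only injects configuration, not package installs.
        UserData="#!/bin/bash\necho 'env=prod' > /etc/myapp/config\n",
    )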

The next is assessing volume versus frequency. The way I best describe this is that in the data space, over the past couple of years, I’ve noticed folks using AWS Lambda functions to process data. That works fairly well. They say, I need to create x number of lambdas whenever something comes into the queue. It does work: the lambdas process the data, and for the most part the price point is OK. Then there comes a point where you have a lot of jobs, or a lot of data to process, and the volume no longer justifies using Lambda; it becomes more expensive. This is especially true if the Lambda functions need private IP addresses in a virtual network: it takes time to allocate and free those network interfaces, and while you’re not directly charged for that, you’ll find the lambdas waiting for IP addresses to be allocated in the private network. At that point, you might as well switch to a more formalized data processing tool, whether it’s EMR or something else. The idea is to assess whether the volume and the frequency at which you create these resources outweigh just having those resources exist and stay in place. This is the same reason for keeping a pool of Kubernetes cluster workers, or something like that, available: sometimes you just need it on hand, and it does justify the cost. Assessing volume versus frequency becomes an important part of optimizing for elasticity and making the most of your cloud resources.
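
A back-of-envelope break-even model shows the trade-off. All of the rates below are illustrative placeholders rather than quoted prices; the point is only that per-invocation costs scale with volume while an always-on worker is flat, so a crossover exists.

    # Illustrative rates, not real pricing.
    PER_INVOCATION = 0.0000002   # charge per function invocation
    PER_GB_SECOND = 0.0000167    # charge per GB-second of function runtime
    WORKER_HOURLY = 0.10         # always-on processing instance

    def function_monthly(invocations: int, avg_seconds: float, memory_gb: float) -> float:
        """Monthly cost of a per-invocation (Lambda-style) model."""
        return invocations * (PER_INVOCATION + avg_seconds * memory_gb * PER_GB_SECOND)

    def worker_monthly(hours: float = 730) -> float:
        """Monthly cost of keeping a worker running continuously."""
        return WORKER_HOURLY * hours

    for jobs in (1_000_000, 10_000_000, 100_000_000):
        print(
            f"{jobs:>11,} jobs/month: "
            f"per-invocation ${function_monthly(jobs, 0.5, 0.5):8.2f} "
            f"vs always-on ${worker_monthly():.2f}"
        )

Under these made-up numbers the always-on worker wins somewhere between ten and a hundred million jobs a month; the crossover moves with your actual rates, runtimes, and memory sizes.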

Then, as a last resort, you rebuild to mitigate cost. Refactoring to mitigate cost is a very costly effort in itself, it takes time, and it often involves adapting to new abstractions. If you’re working with a system that was never meant to run in the cloud, and you’re now trying to mitigate the cost of a lift and shift, you will have to replatform. You will ultimately have to rebuild and rearchitect to fully mitigate the cost of the lift and shift, and find a technology that works for your architecture and the functions that you want. That is a last resort.

Changeability – Use the Latest Technologies

Finally, changeability. We all want to use the latest technologies, but it’s not that easy to change an application to use the latest technology, so how do we do it? Let’s imagine an example I’ve been hearing a lot about recently. You have a CI framework, and you’ve started looking into the cloud native technology of GitOps. I think it’s a really fascinating space, and it’s really helpful for certain use cases. You decide, this is really useful for me, I want to continuously deploy. I want to take advantage of blue-green deployments, and I want them to be completely automated: my canary traffic will increase automatically, and I don’t have to worry about it. That’s great. The only problem is that the latest often involves paradigm shifts. Telling your security team, or your management team, or anybody, that you’re going to remove a manual quality gate to production and automatically do this wishy-washy blue-green thing is a really hard sell. I think it’s really cool, but I’ve noticed it’s not so easy to describe. Most people wonder, what happens if it fails? The answer is that most of these technologies, especially in the GitOps space, will roll back for you. On the other hand, it’s not so easy to just trust that assumption. The latest often involves paradigm shifts.

Instead, what you might think about doing is changing not the tool but the paradigm first. Rather than saying, I have a CI framework and now I’m doing continuous delivery, consider an intermediate step: do some modified continuous deployment on your existing CI framework. That could be, perhaps, Spinnaker. You might say, I’m just going to use Spinnaker first, with a manual deployment and a manual check. Once I’m comfortable with the canary and blue-green approach, then maybe I’ll shift to a more continuous deployment approach, or even to the GitOps approach. There are a lot of intermediate options, and these intermediate steps exist for change. It’s not that you can immediately adopt a tool and it will immediately help you. The reality is that most of the time, when you’re looking to go from cloud hosted to cloud native, you’ll need to take the intermediate step, especially if you plan such a drastic change that it alters the underlying assumptions of how your application behaves and how it’s deployed.

Of course, the first thing you’re going to do is assess the benefit, but from a more nuanced perspective: you want to assess the benefit of changing the paradigm, of changing the assumption itself. The other thing you’ll want to do is review all the previous patterns I talked about, all the foundational practices for adaptability, immutability, and the rest, and confirm you have those foundations in place. If you don’t, it will be incredibly difficult to change the tool afterward. Then, choose an intermediate step. In the case of the CI framework, maybe we modify our CI framework to do a manual blue-green. Once we’re comfortable, then we can move toward an automated blue-green deployment, or an automated canary deployment.

Finally, refactor your application or infrastructure to accommodate that intermediate step. In the case of the CI framework I was talking about, moving toward a continuous deployment approach, you will need to refactor the application to expose metrics. If your application does not expose metrics, continuous deployment will not work, because an automated deployment needs somewhere to retrieve the application metrics. You need either an error rate or a composite metric to understand how the application is behaving and whether it is failing or succeeding. Without that metric, you cannot do an automated deployment, and it’s also hard to determine whether you should increase traffic to the new application. You do have to refactor the application or infrastructure to provide that metric; otherwise, the gap will stop you from adopting the next latest and greatest approach.
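
To show why that metric is the linchpin, here is a sketch of the decision an automated rollout makes. It assumes a Prometheus-style query API and an http_requests_total counter labeled by deployment; the URL, metric names, labels, and the 1% error budget are all hypothetical.

    import requests

    PROM_URL = "http://prometheus.internal:9090/api/v1/query"  # hypothetical
    ERROR_BUDGET = 0.01  # promote only while the canary error rate stays under 1%

    # Ratio of 5xx responses to all responses for the canary over the last 5 minutes.
    QUERY = (
        'sum(rate(http_requests_total{deployment="canary",code=~"5.."}[5m]))'
        ' / sum(rate(http_requests_total{deployment="canary"}[5m]))'
    )

    payload = requests.get(PROM_URL, params={"query": QUERY}, timeout=10).json()
    samples = payload["data"]["result"]
    error_rate = float(samples[0]["value"][1]) if samples else 0.0

    # This one number is what makes the rollout automatable: without it, the
    # pipeline has no basis for shifting more traffic or rolling back.
    if error_rate < ERROR_BUDGET:
        print(f"error rate {error_rate:.4f}: promote canary, increase traffic")
    else:
        print(f"error rate {error_rate:.4f}: roll back")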

After that, if you find that you’re having a really difficult time with the intermediate step, and it’s not enough to graduate you to the latest and greatest technology, build a greenfield environment. It’s not easy, and it will be a very high level of effort, so you have to assess whether it’s high value as well. Most of the time, if you’ve done the basic steps and an intermediate step, and it’s still not quite working, and you still find you need this new technology because it has a significant business benefit for you, then you build a greenfield environment.

Summary

Returning to the definition of cloud native architecture: I’ve gone through a lot of foundational practices that move you from cloud hosted to cloud native. The point of these practices is not that following them makes you cloud native by definition; it’s that you end up with a more scalable application that takes advantage of its dynamic environment. As these technologies change, year over year, even month over month, you have the ability to change and adapt with them. In summary, if you want to get to a cloud native architecture, consider adaptability, observability, immutability, elasticity, and changeability. All of these contribute to a more cloud native approach and help your application better adapt to changes in your architecture.




Podcast: Establishing an Open-Source Community Inside an Organisation

MMS Founder
MMS Natnael Belay

Article originally posted on InfoQ. Visit InfoQ

Transcript

Shane Hastie: Hey, folks. Before we get into today’s podcast, I wanted to share that InfoQ’s International Software Development Conference, QCon, will be back in San Francisco from October 2 to 6. QCon will share real world technical talks from innovative senior software development practitioners on applying emerging patterns and practices to address current challenges. Learn more at qconsf.com. We hope to see you there.

This is Shane Hastie for the InfoQ Engineering Culture Podcast. Today, I’m sitting down with Nate Belay. Nate is in East Coast USA and has been doing work on setting up open source communities within organizations. Nate, welcome. Thanks for taking the time to talk to us today.

Introductions [00:52]

Nate Belay: Thank you, Shane. It’s really an honor to be part of this podcast and speak to the InfoQ community. Thank you for having me.

Shane Hastie: The place I like to start is who’s Nate? Tell us a bit about yourself and your background.

Nate Belay: Nate is a recent college grad, class of 2020. I have three years’ experience working in the software technology realm. For a couple of years, I worked as a program manager overseeing R&D and engineering teams, making sure they delivered some of the innovative products we were working on. That was for a company based out of Boston called PTC. There, I was exposed to a cultural shift within the organization as they moved to a SaaS development model, and there were inklings about whether we could adopt some open source development models. They were very early in their lifecycle for that.

Then after working there for two years, I moved to Google as a technical program manager in the Android organization. Being embedded in the Jetpack subteam, I was exposed to a lot of open source development. The team, I realized, was very mature in their open source development lifecycle; it’s embedded within the culture and the ethos of Android. So I’ve had the chance to experience both worlds, where one company was early on and the current company is very mature in that. That’s what inspired me to come here.

Shane Hastie: Why would an organization embrace an open source culture? Perhaps one step back, what is an open source culture?

Defining an open source culture [02:33]

Nate Belay: Taking a step back from the technology, I look at open source as an opportunity for a group of people to collaborate to develop and improve any kind of artifact. It could be a recipe for food, a Wikipedia page, or a software product. It’s a very open platform for creative, like-minded people to create a community and then contribute to the betterment of whatever artifact they’re working on. That’s how I look at open source, through that lens. Within an organization, that type of mentality really helps make the culture of the company more open and transparent. My argument is that the principles and guidelines of open source can be applied within the context of an organization and make the culture there genuinely open.

Shane Hastie: For an organization to move in this direction, you mentioned open source principles. What are some of those key principles that they would need to bring in play?

The need for clear open source principles [03:44]

Nate Belay: One of the core principles they need to bring into play is having a clear mission. To go open source, you really need a clearly defined mission of why you’re doing it and who you’re doing it for. One reason is that as you’re managing a community at scale, you want to instill the best practices and values that community stands for. It’s really easy for a community to diverge from what you want; especially if you are a company with an open source product and the community you’re supporting does not align with your values, that could reflect on the brand of the company. You want to be really careful about who you’re targeting, who the open source project is for, and the contribution guidelines for it. I think a clear mission is very important.

The other one is having, I kind of mentioned this, clear guidelines. These guidelines could be of three types, in my opinion. One is about the ethos of the community or the product that you’re trying to build, which I touched upon just now. The second is technical guidelines and principles. These could be best practices around, for instance, API design. When contributors of the open source community try to merge in code, you can have best practices embedded within the development workflow, or written down, so that contributors understand what’s expected of them as they contribute. Very technical guidelines and principles.

The third could be a code of conduct. In my opinion, this means how the community works together among themselves: being respectful, being inclusive, and things like that. One of the challenges with open source is the lack of face-to-face communication. You might be working together or getting pull requests from someone across the world whom you’ve never met. As you’re reviewing code, the likelihood that the review becomes harsh or uninclusive is very high. You might need a code of conduct to instill the principle that when you’re dealing with people, these are the accepted values, you can’t violate them, and you need a community manager to make sure they are adhered to.

So: a technical understanding of the values you want to instill in the product you’re building, a code of conduct for the community members to work together, and also the ethos of why you’re building the product and who its core customers or consumers are, so that people can build empathy. There are a few more things I can touch upon in addition to the guidelines. One of them is making sure that the open source community you’re trying to build, which starts within the organization, is not affected by the org structure you have within the organization.

Meaning, I believe it needs to be very separate, with a flat hierarchy where people seek out other people based on their expertise and not their level in the organization. Even if a VP tries to submit code, they’ll need to go through the same process anyone else would: code reviews and things like that. Because in a community, the need of the community comes before the need of the individual, we need to make sure that people are seeking out other people because of their expertise.

The last one is very product-centric. As you’re building the product, you need to make sure that the third-party products you depend on are also open source. If you’re trying to build a thriving open source community, and the only people able to use it or build on top of it are within the organization, because it’s behind some API gateway and you need to pay for access or whatever, that doesn’t make for a very thriving community. As much as possible, all the resources and dependencies you have need to also be open source, so that anyone can check out the code, build, test, and merge code. Those are the recipes an organization can think about before starting an open source community.

Shane Hastie: How can this go wrong?

Pitfalls to avoid [08:12]

Nate Belay: That’s a good question. I think if the organization is not ready, it is very easy for things to devolve into a closed source system. It starts off as a way to get some eyeballs and improve the brand of the company, people start to rely on the product that’s been open sourced, and there’s a whole community around it. And if the major contributor to that community is one organization, or a couple of organizations, they might choose to take it closed source. At that point, external people who were depending on it suddenly don’t have access. We’ve seen that happen a few times, for different reasons. That’s very detrimental, in my opinion, both to the organization doing it and to the community that was relying on it.

Another way it could go wrong is if the code of conduct doesn’t get adhered to. If you’re seeing that in code reviews people are giving really harsh, mean, rude feedback on code you’re trying to merge, the likelihood that you’d contribute code next time is very low. If the contributors aren’t there, then it doesn’t really make sense to even have an open source community and invest in it. Those are a couple of things that could go wrong. Obviously, there could be more, but I think those are the highest priority.

Shane Hastie: Turning that around, let’s take the way we give feedback, for instance, on pull requests, on code reviews. You mentioned guidelines. But what level of enforcement? Or is it enforcement? Who takes ownership when they see things not going the way they’d like?

There needs to be stewardship of the community [09:53]

Nate Belay: I think the ownership should sit with whoever is the steward of the community, and with the community members themselves. For instance, it’s very common for a project to be open source but for one company to be doing the majority of the development. Anyone can look at the source code and propose changes, but the main steward could be one company. In that case, it’s up to that main steward to make sure the guidelines are adhered to. With the current evolution in AI, for instance, a lot of that can be automated. We have tools right now to track specific keywords, and those could obviously get even smarter at understanding context so that violations are automatically flagged.

It is upon the main steward of the community to make sure there are tools and practices in place to enforce the community guidelines. But a step further than that: if people are bought into the community they’re contributing to and they see behavior that’s not in line, I would argue it’s also on them; they have a responsibility to flag it to the community manager, or whoever is in charge of facilitating that community, such as a program manager. With a combination of tools, automation, and self-flagging, a lot of that can be weeded out.

Shane Hastie: You mentioned that you feel this needs to be separate from the organizational hierarchies. If I’m a leader in an organization and I want to get the benefit of even internally open sourcing the work we’re producing, whether it’s code or other artifacts, other types of work, this means we’ve got to actually step back from some of the organizational structures and incentives and rules, perhaps.

Open source can exist alongside organisational structures [11:51]

Nate Belay: I think both could be true. The open source community can exist in parallel to the organizational structure. Meaning, the VP or whoever is in charge could give the blessing for that community to get started and then to become open source. Within the context of an organization, I can’t start an open source community just because I want to, because there might be things I’m not aware of, for instance, regulatory issues or secret projects. First of all, you need alignment with the hierarchy you already have in order to get started.

But once you have started, the community should have its own charter and its own leadership. Not in the traditional sense of having a boss, but a group of experts in that specific topic who have their own forum to manage the community. To get started, though, I think you need a blessing from someone within the organization who has the authority to kick it off. As soon as that happens, it is important to make sure the community is very separate and has its own charter. It is possible that after that open source project has started, leadership might decide, “Okay, I am investing in people who are working on this, but we are making the code available for everyone. Why should I do that?”

They might decide that if the open source community is thriving, the community can carry the work, and pull the plug on the project. That is possible; there’s no way around it. In that case, hopefully the community outside of the company is strong enough to carry on the work. That’s definitely a constraint we’re working with, because at the end of the day, they have to approve the budget. But hopefully people work with the best intentions in mind, and when they start these projects, they’re intentional that this should be very separate from what we’re doing inside the company.

Shane Hastie: Again, from a leadership perspective, or even as an individual contributor, what proportion of my time and effort should I be putting into the open source work versus my paid job? Or if, really, both are part of my paid job, how do I allocate my time?

The dilemma of time allocation [14:16]

Nate Belay: I think it depends on the value of the product: whether the feature you’re trying to develop really belongs in the open source project, and what the priority of that feature is. It’s not a question of what percent of my time should go to open source work versus internal projects, but more: I have a feature deliverable I need to get out now; where does it belong, and where would the end user benefit the most, inside the company or out in the open? That’s a decision that needs to be made at the start of a project.

But after that, it is possible to spend 100% of your time on the open source project as long as you’re delivering that feature, because that feature could be helpful not just for the open source community but for the company itself; we might have other products that will rely on it. When we do our annual planning, our feature planning, whatever process we use, we should decide: where should this live? Do we have any competitive edge on it? If not, then as much as possible we can push things out to be open source and dedicate most of our time there. If that feature should be developed inside because of proprietary knowledge, then we balance as needed. I would take it on a feature-by-feature basis, not as a blanket percentage of time commitment.

Shane Hastie: What are the other gotchas? What are the things that you’ve seen that people need to be aware of before they go down this path?

Some potential mistakes to avoid [15:55]

Nate Belay: I think the biggest one is not having a clear picture of what end product we want. We assume that just because loose connections exist, “I think a community kind of exists, let’s get that rolling”, without really doing the research of how many people would be interested in this. Because there’s going to be a lot of investment. If you try to build an open source community, most of the time that ties together with other investments, like a common infrastructure for building and testing code, for instance. That has its own associated cost.

Let’s say you go through all of that setup, getting approvals from legal and whatnot, but the community isn’t really there and no one really benefits from it; I think that could be a wasted effort. Although I think that’s rare, because most of the time, when things are open, someone will find them helpful and learn from them at a minimum. But before making a big investment, I would encourage people to actually do the legwork: is there a community around us, and should I continue my infrastructure-related investments to support it?

Shane Hastie: Anything else you’d like to tell the community about being part of and establishing open source within our organizations?

Benefits for the organisation and the community [17:18]

Nate Belay: I think I can expand a little bit on why someone would want to go down this path; I touched upon it throughout the conversation. Why would you want to set up an open source community? It seems counterintuitive, especially in the context of an organization. Some people refer to this as inner source, where the project is open within the organization but not publicly available. I tend to see them through the same lens, because they should have the same guidelines and principles. One of the benefits of setting up an open source community is the sense of inclusion and ownership that employees feel if you start an open source project within the organization.

One of the reasons is, if you have open source and you merge a change, you’re displaying the values and the core principles of that community. Naturally, that code goes through rigorous testing and reviews, and that really helps other people understand the work you’re doing. Also, if I’m reviewing someone else’s code and giving them feedback, and they merge that code, then even though I’m not directly working on that feature, if it ships I can say, “I’ve made an impact on someone else’s product.” That makes someone feel fulfilled at some level.

Another thing is that it provides inclusion. If the organization has this kind of development model, open source thrives on meritocracy. Like I said earlier, it’s not based on the organizational hierarchy, but on what you can do for the community and the quality of your contribution. That sidesteps some of the biases people have. At the same time, it could be a problem as well; it’s a double-edged sword. I talked about the risks of it being very anonymous, of not knowing who the other person is, but it can also be a benefit, because you are judging someone solely by their output. That’s one of the benefits of going down this path: it provides your employees with inclusion and a sense of ownership.

Another one is better resource utilization, meaning infrastructure resources. What I mean is that open source usually ties itself to a shared infrastructure for development. If each developer provisions their own VMs and runs jobs there, that motivates low-quality development, because you’re not really incentivized to optimize resource usage. As companies move to an open source model, they usually also have a central infrastructure management unit that manages the servers for them. The benefit is that each person’s job or request to the server can be dynamically scaled up or down.

That also speeds up the cloud readiness, the digital transformation of an organization because the actual development model supports that type of infrastructure. It doesn’t need to be the big cloud vendors, but an organization can have its own server pool, and then provide service to developers in a fair way where users can be scaled up and down based on the community usage or your usage. That way, you’re also having an impact on the environment. Because as teams move towards a shared infrastructure and a cloud type of infrastructure, their compute energy usage goes down. So you’re using less energy, which translates to better environmental impacts.

There was one study that Etsy reduced their compute energy usage by about 50% by switching their development to the cloud. So changing the development model to open source can help an organization speed up their cloud migration. That is, if you want to put numbers on why you would do this, there can actually be some cost savings. In addition, for instance, you might not even need to buy expensive laptops for the developers.

If you have a server pool you can build things on, you can give developers lower-end laptops, and they can send requests to the server for the actual build and testing. That way, you minimize the cost of the laptops you need to provide for employees as well. So there are some cost savings in going through this migration. Those are a couple of benefits I hadn’t mentioned yet of why I encourage teams to move to the open source development model.

Shane Hastie: Thank you very much. Some interesting things and some ideas that people can explore. If people want to continue the conversation, where do they find you?

Nate Belay: I’m on LinkedIn. Natnael, N-A-T-N-A-E-L, Belay. I believe we can link it on this episode.

Shane Hastie: I’ll put it in the show notes.

Nate Belay: Also, I can have my email on the show notes as well. I’m open for discussion. Like I said in my introduction, I’m now responsible for the Android Jetpack program, which is open source, so I get to interact with the open source community. I also had the experience of trying to start an open source development model within an organization, which was challenging. So I’ve had a taste of both worlds. I’m happy to engage with other people who have similar or very different experiences, or any questions around this, because it is an evolving space and we’re seeing more of it in the world with open source AI models; everything is going open source. This is definitely more of a trend now. I’m sure people have had their own experiences, so I’m happy to engage in conversations there.

Shane Hastie: Right. Well, thank you very much indeed.

Nate Belay: Thank you. Thank you for inviting me over. Let’s continue the discussion with anyone who’s interested. Thank you for having me on the show and thank you to the listeners.




Top Key Players are MariaDB, Sybase, MySQL, Microsoft, PostgreSQL, Oracle Database, etc.

MMS Founder
MMS RSS

Posted on mongodb google news. Visit mongodb google news

PRESS RELEASE

Published September 22, 2023


Global |112 Pages| report on the “SQL Market” offers a detailed, in-depth research analysis (2023-2031) of a market expected to witness remarkable growth in the coming years. The implementation of new technologies and innovative solutions will drive the market’s revenue generation and increase its market share by 2030, with revenue broken out by type (Text, Number, Date) and forecasted market size by application (Retail, Online Game Development, IT, Social Network Development, Web Applications Management, Others).

Get a sample PDF of the report – https://www.marketreportsworld.com/enquiry/request-sample/23901581

This report offers a comprehensive analysis of the SQL Market, encompassing its present condition, key players in the industry, emerging trends, and prospects for future growth. It delves deeply into the global market scenario, providing valuable insights into current trends and drivers influencing the SQL Market on a global scale. The report also includes statistical data on revenue growth in various regional and country-level markets, as well as an assessment of the competitive landscape and detailed organization analyses for the projected period. Moreover, the SQL Market Report explores potential drivers for development and examines the current market share distribution and adoption of various types, technologies, applications, and regions up to 2030. Ask for a Sample Report

Who are the largest players in the SQL market worldwide?

  • MariaDB
  • Sybase
  • MySQL
  • Microsoft
  • PostgreSQL
  • Oracle Database
  • Basho Technologies
  • MarkLogic Corporation
  • MongoDB

The global SQL market is anticipated to rise at a considerable rate during the forecast period, between 2023 and 2030. In 2022, the market was growing at a steady rate, and with the rising adoption of strategies by key players, the market is expected to rise over the projected horizon.

The global SQL Market is divided based on application, end user, and region, with a specific focus on manufacturers situated in various geographic areas. The study offers a comprehensive analysis of diverse factors that contribute to the industry’s growth. It also outlines potential future impacts on the industry through various segments and applications. The report includes a detailed pricing analysis for different types, manufacturers, regional considerations, and pricing trends.

The SQL Share report delivers an overview of the market’s value structure, cost determinants, and key driving factors. It assesses the industry landscape and subsequently examines the global landscape encompassing industry size, demand, applications, revenue, products, regions, and segments. Moreover, SQL Market report presents the competitive scenario in the market among distributors and manufacturers, encompassing market value assessment and a breakdown of the cost chain structure.

Get a Sample Copy of the SQL Market Report 2023

Global SQL Market Report: Key Insights

  • REGIONAL SHARE: The SQL Market report provides market size data for various regions, including North America, Europe, Asia Pacific, Latin America, the Middle East, and Africa. In 2022, North America dominated the SQL market, followed by Europe, while Asia Pacific held a significant share.
  • SEGMENT OVERVIEW: The market is segmented based on Type (Text, Number, Date) and Application (Retail, Online Game Development, IT, Social Network Development, Web Applications Management, Others), with market size data provided for each segment.
  • COMPETITIVE LANDSCAPE: The report offers a comprehensive analysis of prominent players who have substantial market shares. It includes information on the concentration ratio and provides detailed insights into the market performance of each player. This allows readers to gain a holistic understanding of the competitive landscape and better knowledge of their competitors.
  • KEY FACTORS CONSIDERED: With the global impact of COVID-19, the report tracks market changes during and after the pandemic. It examines the effects on upstream and downstream market participants, changes in consumer behavior, demand fluctuations, transportation challenges, trade flow adjustments, and other relevant factors.
  • REGIONAL CONFLICTS: The report also addresses the influence of regional conflicts, such as the Russia-Ukraine war, on the market. It discusses how these conflicts have negatively affected the market and provides insights into the expected evolution of the market in the coming years.
  • CHALLENGES AND OPPORTUNITIES: The report highlights factors that could create opportunities and enhance profitability for market players. It also identifies challenges that may hinder or pose a threat to player development. These insights can assist in making strategic decisions and their effective implementation.

SQL Market Report Overview:

The global SQL market size was valued at USD million in 2022 and is projected to reach USD million by 2028, growing at a CAGR over 2022-2028.

SQL is a special-purpose programming language for managing data in relational database management systems, or for stream processing in relational data stream management systems. SQL is based on relational algebra and tuple relational calculus, and includes a data definition language and a data manipulation language.

The SQL market report covers sufficient and comprehensive data on market introduction, segmentations, status and trends, opportunities and challenges, industry chain, competitive analysis, company profiles, and trade statistics, etc. It provides in-depth and all-scale analysis of each segment of types, applications, players, 5 major regions and sub-division of major countries, and sometimes end user, channel, technology, as well as other information individually tailored before order confirmation.

Meticulous research and analysis were conducted during the preparation of the report. The qualitative and quantitative data were gathered and verified through primary and secondary sources, which include, but are not limited to, magazines, press releases, paid databases, Maia Data Center, national customs, annual reports, public databases, and expert interviews. Primary sources include extensive interviews of key opinion leaders and industry experts, such as experienced front-line staff, directors, CEOs, and marketing executives, as well as downstream distributors and end clients.

In this report, the historical period covers 2018 to 2022, and the forecast period ranges from 2023 to 2028. The facts and data are presented in tables, graphs, pie charts, and other pictorial representations, which enhances visual clarity and supports decision-making for business strategy.

The report provides a forecast of the SQL Market across regions, types, and applications, projecting sales and revenue from 2021 to 2030. It emphasizes SQL Market Share, distribution channels, key suppliers, evolving price trends, and the raw material supply chain. The SQL Market Size report furnishes essential insights into the current industry valuation and presents market segmentation, highlighting growth prospects within this sector.

This report centers on SQL Market manufacturers, analyzing their sales, value, market share, and future development plans. It defines, describes, and predicts SQL Market Growth based on type, application, and region. The goal is to examine global and key regional market potential, advantages, opportunities, challenges, as well as restraints and risks. The report identifies significant trends and factors that drive or hinder SQL Market growth, benefiting stakeholders by pinpointing high-growth segments. Furthermore, the report strategically assesses each submarket’s individual growth trend and its contribution to the overall SQL Market.

Inquire more and share questions if any before the purchase on this report at: https://www.marketreportsworld.com/enquiry/pre-order-enquiry/23901581

What are the types of SQL available in the Market?

  • Text
  • Number
  • Date

What are the factors driving applications of the SQL Market?

  • Retail
  • Online Game Development
  • IT
  • Social Network Development
  • Web Applications Management
  • Others

The global SQL market trends, development, and marketing channels are analysed. Finally, the feasibility of new investment projects is assessed and overall research conclusions offered. The global SQL market growth is anticipated to rise at a considerable rate during the forecast period, between 2021 and 2028. In 2021, the market was growing at a steady rate, and with the rising adoption of strategies by key players, the market is expected to rise over the projected horizon.

TO KNOW HOW COVID-19 PANDEMIC AND RUSSIA UKRAINE WAR WILL IMPACT THIS MARKET – REQUEST A SAMPLE

The SQL Market Report also mentions the market share accrued by each product in the SQL market, along with production growth.

Which regions are leading the SQL Market?

North America (Covered in Chapter 6 and 13)

Europe (Covered in Chapter 7 and 13)

Asia-Pacific (Covered in Chapter 8 and 13)

Middle East and Africa (Covered in Chapter 9 and 13)

South America (Covered in Chapter 10 and 13)

Purchase this report (Price 3480 USD for a single-user license) – https://www.marketreportsworld.com/purchase/23901581      

Reasons to Purchase SQL Market Report?

  • SQL Market Report provides qualitative and quantitative analysis of the market based on segmentation involving both economic as well as non-economic factors.
  • SQL Market report gives outline of market value (USD) data for each segment and sub-segment.
  • This report indicates the region and segment that is expected to witness the fastest growth as well as to dominate the market.
  • SQL Market Analysis by geography highlighting the consumption of the product/service in the region as well as indicating the factors that are affecting the market within each region.
  • Competitive landscape which incorporates the market ranking of the major players, along with new service/product launches, partnerships, business expansions and acquisitions in the past five years of companies profiled.
  • Extensive company profiles comprising of company overview, company insights, product benchmarking and SWOT analysis for the major market players.
  • The current as well as the future market outlook of the industry with respect to recent developments (which involve growth opportunities and drivers as well as challenges and restraints of both emerging and developed regions).
  • Includes an in-depth analysis of the SQL market from various perspectives through Porter’s five forces analysis, and provides insight into the market through the value chain.

Detailed TOC of Global SQL Market Research Report, 2023-2030

1 SQL Market Overview
1.1 Market Definition and Product Scope
1.2 Global SQL Market Size and Growth Rate 2018-2028
1.2.1 Global SQL Market Growth or Decline Analysis
1.3 Market Key Segments Introduction
1.3.1 Types of SQL
1.3.2 Applications of SQL
1.4 Market Dynamics
1.4.1 Drivers and Opportunities
1.4.2 Limits and Challenges
1.4.3 Impacts of Global Inflation on SQL Industry

2 Industry Chain Analysis
2.1 SQL Raw Materials Analysis
2.2 SQL Cost Structure Analysis
2.3 Global SQL Average Price Estimate and Forecast (2018-2028)
2.4 Factors Affecting the Price of SQL
2.5 Market Channel Analysis
2.6 Major Downstream Customers Analysis

3 Industry Competitive Analysis
3.1 Market Concentration Ratio and Market Maturity Analysis
3.2 New Entrants Feasibility Analysis
3.3 Substitutes Status and Threats Analysis

4 Company Profiles
4.1 MariaDB
4.1.1 MariaDB Basic Information
4.1.2 Product or Service Characteristics and Specifications
4.1.3 MariaDB SQL Sales, Price, Value, Gross Margin 2018-2023
4.2 Sybase
4.2.1 Sybase Basic Information
4.2.2 Product or Service Characteristics and Specifications
4.2.3 Sybase SQL Sales, Price, Value, Gross Margin 2018-2023
4.3 MySQL
4.3.1 MySQL Basic Information
4.3.2 Product or Service Characteristics and Specifications
4.3.3 MySQL SQL Sales, Price, Value, Gross Margin 2018-2023
4.4 Microsoft
4.4.1 Microsoft Basic Information
4.4.2 Product or Service Characteristics and Specifications
4.4.3 Microsoft SQL Sales, Price, Value, Gross Margin 2018-2023
4.5 PostgreSQL
4.5.1 PostgreSQL Basic Information
4.5.2 Product or Service Characteristics and Specifications
4.5.3 PostgreSQL SQL Sales, Price, Value, Gross Margin 2018-2023
4.6 Oracle Database
4.6.1 Oracle Database Basic Information
4.6.2 Product or Service Characteristics and Specifications
4.6.3 Oracle Database SQL Sales, Price, Value, Gross Margin 2018-2023
4.7 Basho Technologies
4.7.1 Basho Technologies Basic Information
4.7.2 Product or Service Characteristics and Specifications
4.7.3 Basho Technologies SQL Sales, Price, Value, Gross Margin 2018-2023
4.8 MarkLogic Corporation
4.8.1 MarkLogic Corporation Basic Information
4.8.2 Product or Service Characteristics and Specifications
4.8.3 MarkLogic Corporation SQL Sales, Price, Value, Gross Margin 2018-2023
4.9 MongoDB
4.9.1 MongoDB Basic Information
4.9.2 Product or Service Characteristics and Specifications
4.9.3 MongoDB SQL Sales, Price, Value, Gross Margin 2018-2023

5 SQL Market – By Trade Statistics
5.1 Global SQL Export and Import
5.2 United States SQL Export and Import Volume (2018-2023)
5.3 United Kingdom SQL Export and Import Volume (2018-2023)
5.4 China SQL Export and Import Volume (2018-2023)
5.5 Japan SQL Export and Import Volume (2018-2023)
5.6 India SQL Export and Import Volume (2018-2023)

6 North America SQL Market Overview Analysis
6.1 North America SQL Market Development Status (2018-2023)
6.2 United States SQL Market Development Status (2018-2023)
6.3 Canada SQL Market Development Status (2018-2023)
6.4 Mexico SQL Market Development Status (2018-2023)

7 Europe SQL Market Overview Analysis
7.1 Europe SQL Market Development Status (2018-2023)
7.2 Germany SQL Market Development Status (2018-2023)
7.3 United Kingdom SQL Market Development Status (2018-2023)
7.4 France SQL Market Development Status (2018-2023)
7.5 Italy SQL Market Development Status (2018-2023)
7.6 Spain SQL Market Development Status (2018-2023)

8 Asia Pacific SQL Market Overview Analysis
8.1 Asia Pacific SQL Market Development Status (2018-2023)
8.2 China SQL Market Development Status (2018-2023)
8.3 Japan SQL Market Development Status (2018-2023)
8.4 South Korea SQL Market Development Status (2018-2023)
8.5 Southeast Asia SQL Market Development Status (2018-2023)
8.6 India SQL Market Development Status (2018-2023)

9 Middle East and Africa SQL Market Overview Analysis
9.1 Middle East and Africa SQL Market Development Status (2018-2023)
9.2 Saudi Arabia SQL Market Development Status (2018-2023)
9.3 UAE SQL Market Development Status (2018-2023)
9.4 South Africa SQL Market Development Status (2018-2023)

10 South America SQL Market Overview Analysis
10.1 South America SQL Market Development Status (2018-2023)
10.2 Brazil SQL Market Development Status (2018-2023)
10.3 Argentina SQL Market Development Status (2018-2023)

11 SQL Market – By Regions
11.1 Global SQL Sales by Regions (2018-2023)
11.2 Global SQL Value by Regions (2018-2023)
11.3 SQL Value and Growth Rate (2018-2023) by Regions
11.3.1 North America SQL Value and Growth Rate (2018-2023)
11.3.2 Europe SQL Value and Growth Rate (2018-2023)
11.3.3 Asia Pacific SQL Value and Growth Rate (2018-2023)
11.3.4 Middle East and Africa SQL Value and Growth Rate (2018-2023)
11.3.5 South America SQL Value and Growth Rate (2018-2023)

12 SQL Market – By Types
12.1 Global SQL Sales by Types
12.1.1 Global SQL Sales by Types (2018-2023)
12.1.2 Global SQL Sales Market Share by Types (2018-2023)
12.2 Global SQL Value by Types
12.2.1 Global SQL Value by Types (2018-2023)
12.2.2 Global SQL Value Market Share by Types (2018-2023)
12.3 Global SQL Price Trends by Types (2018-2023)
12.4 Text Sales and Price (2018-2023)
12.5 Number Sales and Price (2018-2023)
12.6 Date Sales and Price (2018-2023)

13 SQL Market – By Applications
13.1 Global SQL Sales by Applications
13.1.1 Global SQL Sales by Applications (2018-2023)
13.1.2 Global SQL Sales Market Share by Applications (2018-2023)
13.2 Global SQL Value by Applications
13.2.1 Global SQL Value by Applications (2018-2023)
13.2.2 Global SQL Value Market Share by Applications (2018-2023)
13.3 Retail Sales, Revenue and Growth Rate (2018-2023)
13.4 Online Game Development Sales, Revenue and Growth Rate (2018-2023)
13.5 IT Sales, Revenue and Growth Rate (2018-2023)
13.6 Social Network Development Sales, Revenue and Growth Rate (2018-2023)
13.7 Web Applications Management Sales, Revenue and Growth Rate (2018-2023)
13.8 Others Sales, Revenue and Growth Rate (2018-2023)

14 SQL Market Forecast – By Types and Applications
14.1 Global SQL Market Forecast by Types
14.1.1 Global SQL Sales by Types (2023-2028)
14.1.2 Global SQL Value by Types (2023-2028)
14.1.3 Global SQL Value and Growth Rate by Type (2023-2028)
14.1.4 Global SQL Price Trends by Types (2023-2028)
14.2 Global SQL Market Forecast by Applications
14.2.1 Global SQL Sales by Applications (2023-2028)
14.2.2 Global SQL Value by Applications (2023-2028)
14.2.3 Global SQL Value and Growth Rate by Application (2023-2028)

15 SQL Market Forecast – By Regions and Major Countries
15.1 Global SQL Sales by Regions (2023-2028)
15.2 Global SQL Value by Regions (2023-2028)
15.3 North America SQL Value by Countries (2023-2028)
15.4 Europe SQL Value by Countries (2023-2028)
15.5 Asia Pacific SQL Value by Countries (2023-2028)
15.6 Middle East and Africa SQL Value by Countries (2023-2028)
15.7 South America SQL Value by Countries (2023-2028)

16 Research Methodology and Data Source
16.1 Research Methodology
16.2 Research Data Source
16.2.1 Secondary Data
16.2.2 Primary Data
16.2.3 Legal Disclaimer

Continued

Get a Sample Copy of the SQL Market Report 2023

About Us: –

Market Reports World is a credible source for the market reports that will give your business the lead it needs. The market is changing rapidly with the ongoing expansion of the industry. Advances in technology have provided today’s businesses with multifaceted advantages, resulting in daily economic shifts. It is therefore important for a company to understand the patterns of market movements in order to strategize better. An efficient strategy gives companies a head start in planning and an edge over their competitors.

Contact Us:

Market Reports World

Phone: US: +1 424 253 0946
UK: +44 203 239 8187

Email: [email protected]
Web: https://www.marketreportsworld.com


PRWireCenter

Article originally posted on mongodb google news. Visit mongodb google news



Everhart Financial Group Inc. Shows Confidence in MongoDB’s Growth Potential through …

MMS Founder
MMS RSS

Posted on mongodb google news. Visit mongodb google news

Everhart Financial Group Inc., a prominent institutional investor, has recently acquired a significant position in MongoDB, Inc. During the second quarter of this year, the firm purchased 714 shares of MongoDB stock, valued at approximately $293,000. The move highlights Everhart Financial Group Inc.’s confidence in MongoDB’s future prospects and growth potential.

MongoDB, Inc. is a leading provider of a general-purpose database platform on a global scale. Its flagship offering is MongoDB Atlas, a hosted multi-cloud database-as-a-service solution that lets customers leverage the benefits of cloud computing while managing their databases efficiently.
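
For context, here is a minimal sketch of what working with Atlas looks like from application code, using the official pymongo driver; the connection string, database, and collection names below are illustrative placeholders, not details from this article:

    from pymongo import MongoClient

    # Minimal sketch of talking to a MongoDB Atlas cluster with the official
    # pymongo driver. The URI is a placeholder: Atlas generates the real
    # connection string (user, password, cluster host) for each deployment.
    client = MongoClient("mongodb+srv://user:password@cluster0.example.mongodb.net/")
    db = client["shop"]  # hypothetical database name
    db.orders.insert_one({"sku": "A-1", "qty": 2})  # write one document
    print(db.orders.count_documents({}))  # read it back

Because Atlas hosts the cluster, the application only needs a connection string; provisioning, replication, and backups are handled by the service.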

Another key offering is MongoDB Enterprise Advanced, designed specifically for enterprise customers. This commercial database server lets customers run their databases in the cloud, on-premises, or in a hybrid environment. By providing flexibility and scalability options, the product enables businesses to optimize their operations based on their unique requirements.

MongoDB also offers Community Server, a free-to-download version of its database platform that gives developers all the essential functionality they need to start projects on MongoDB smoothly.

The acquisition by Everhart Financial Group Inc. indicates their recognition of the potential advantages and investment value that come with MongoDB’s products and services. As more organizations transition towards cloud-based solutions and recognize the critical importance of efficient data management systems, MongoDB stands at the forefront with its robust and versatile offerings.

By investing in MongoDB at this stage, Everhart Financial Group Inc. has positioned itself to capitalize on anticipated growth in the database market. With increasing demand for reliable data management infrastructure across industries such as finance, healthcare, e-commerce, and technology, MongoDB is well-equipped to cater to these evolving needs.

This development underscores how institutional investors like Everhart Financial Group Inc. are closely monitoring promising companies like MongoDB and exhibiting confidence in their ability to deliver strong financial performance. As a result, such endorsement has the potential to attract more investors and boost MongoDB’s reputation in the market.

It is essential to keep in mind that investing in any stock carries inherent risk. Prospective investors should therefore conduct thorough due diligence and consult financial advisors before making any investment decisions. In this rapidly evolving business landscape, past performance does not necessarily guarantee future success.

In conclusion, Everhart Financial Group Inc.’s recent acquisition of a significant stake in MongoDB, Inc. highlights their positive outlook on the company’s potential for growth and success. MongoDB’s innovative database platform solutions position it well within the thriving data management market. As businesses increasingly rely on efficient data storage and retrieval systems, MongoDB stands ready to cater to these emerging needs effectively.


MongoDB Sees Changes in Investor Landscape and Provides Positive Analyst Reports


September 21, 2023 – MongoDB, Inc., a global provider of general-purpose database platforms, has seen recent changes in its investor landscape. Institutional investors and hedge funds have made noteworthy alterations to their positions in the company. Moody National Bank Trust Division increased its stake in MongoDB by 2.9% during the second quarter, acquiring an additional 38 shares and now owning 1,346 shares worth $553,000. CWM LLC also boosted its holdings by 2.4% during the first quarter, acquiring an additional 52 shares and bringing its total to 2,235 shares worth $521,000.

First Horizon Advisors Inc. grew its MongoDB holdings by 29.5% during the first quarter with an addition of 52 shares valued at $53,000. Similarly, Bleakley Financial Group LLC lifted its holdings by 5.3% during the same period, acquiring an extra 58 shares valued at $267,000. Cetera Advisor Networks LLC likewise increased its MongoDB holdings by 7.4% during the second quarter, purchasing an additional 59 shares valued at $223,000.

Overall, institutional investors and hedge funds now own approximately 88.89% of MongoDB’s stock.

Various analysts have recently provided reports on MongoDB stock and issued price targets for investors to consider. Canaccord Genuity Group raised their price target from $410 to $450 and reiterated a “buy” rating on Tuesday, September 5th. Citigroup also increased their price objective from $430 to $455 and maintained a “buy” rating on Monday, August 28th.

Argus raised their price objective from $435 to $484 and kept a “buy” rating on Tuesday, September 5th as well. Macquarie increased their price objective from $434 to $456 on Friday, September 1st. Stifel Nicolaus also raised their price objective from $420 to $450 on Friday, September 1st, rating MongoDB as a “buy”.

According to data from Bloomberg.com, a consensus rating of “Moderate Buy” has been given to MongoDB with an average price target of $418.08.

In other news, the Chief Technology Officer (CTO) of MongoDB, Mark Porter, sold 2,734 shares of the company’s stock in a transaction on Monday, July 3rd. The shares were sold at an average price of $412.33 for a total transaction amounting to $1,127,310.22. Following this sale, Porter now owns 35,056 shares valued at approximately $14,454,640.48.

Furthermore, Director Hope F. Cochran sold 2,174 shares on Friday, September 15th at an average price of $361.31 per share, for a total transaction value of $785,487.94. Following the completion of this sale, Cochran holds 9,722 shares valued at approximately $3,512,655.82.
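
The per-transaction figures quoted above are internally consistent: each reported total is simply the share count multiplied by the average price. A quick check in Python (a minimal sketch; the numbers are taken directly from the transactions described above):

    # Sanity-check the insider-sale arithmetic quoted in this article.
    sales = [
        ("Mark Porter", 2_734, 412.33, 1_127_310.22),
        ("Hope F. Cochran", 2_174, 361.31, 785_487.94),
    ]
    for name, shares, avg_price, reported_total in sales:
        computed = round(shares * avg_price, 2)
        assert computed == reported_total  # both totals match exactly
        print(f"{name}: {shares:,} shares x ${avg_price} = ${computed:,.2f}")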

Overall, insider data reveals that in the last 90 days insiders have sold approximately 104,694 shares worth $41

Article originally posted on mongodb google news. Visit mongodb google news



HTAP-Enabling In-Memory Computing Technologies Market 2023 Growth Drivers and Future Outlook

MMS Founder
MMS RSS

Posted on mongodb google news. Visit mongodb google news

Global HTAP-Enabling In-Memory Computing Technologies Market Growth (Status and Outlook) 2022-2028

The most recent report, Global “HTAP-Enabling In-Memory Computing Technologies” Market Trends and Insights, is now available on Orbisresearch.com. HTAP (hybrid transactional/analytical processing) refers to systems that serve both transactional workloads and analytics on the same data, a pattern that in-memory computing makes practical.
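
To make the concept concrete, here is a toy sketch of the HTAP pattern using Python’s built-in sqlite3 module with an in-memory database: transactional writes and an analytical aggregate run against the same live store, with no separate ETL pipeline. This illustrates the idea only; it is not how the platforms covered by the report are implemented:

    import sqlite3

    # One in-memory store serving both sides of HTAP:
    # transactional writes and analytical reads on the same live data.
    conn = sqlite3.connect(":memory:")
    conn.execute(
        "CREATE TABLE orders (id INTEGER PRIMARY KEY, region TEXT, amount REAL)"
    )

    # Transactional side: individual order writes, committed atomically.
    with conn:
        conn.executemany(
            "INSERT INTO orders (region, amount) VALUES (?, ?)",
            [("EMEA", 120.0), ("APAC", 75.5), ("EMEA", 310.25)],
        )

    # Analytical side: aggregate over the same data, no ETL step.
    for region, total in conn.execute(
        "SELECT region, SUM(amount) FROM orders GROUP BY region ORDER BY region"
    ):
        print(region, total)

Production HTAP-enabling platforms layer distributed in-memory storage, columnar execution, and concurrency control on top of this basic idea.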

1. Highlights of Report: The highlights section offers a succinct overview of the essential findings and insights presented in the report. It encapsulates the report’s focal points, including key trends, challenges, opportunities, and strategic recommendations, providing readers with a quick understanding of the report’s contents.

2. COVID-19 Impact: This section delves into the significant impact of the COVID-19 pandemic on the HTAP-Enabling In-Memory Computing Technologies market. It evaluates the disruptions caused by the pandemic, such as supply chain interruptions, shifts in consumer behaviour, and changes in market demand. The analysis also considers the strategies employed by businesses to adapt, such as digital transformation and diversification of supply sources. The insights from this section enable businesses to understand the pandemic’s implications and develop responsive strategies.

3. Market Competitive Analysis: The competitive analysis section examines the intricate landscape of the HTAP-Enabling In-Memory Computing Technologies market. It assesses key players, their market positioning, strengths, weaknesses, and strategic approaches. Through this analysis, businesses can identify potential partners, areas of differentiation, and strategies to enhance their market standing. The section also discusses factors contributing to competitive advantage, such as product innovation and customer engagement.

Request a PDF sample report: https://www.orbisresearch.com/contacts/request-sample/6701306

4. Why Choose This Report? This section emphasizes the distinctive attributes that set this research report apart:

• Comprehensive Insights: The report offers a holistic perspective of the HTAP-Enabling In-Memory Computing Technologies market, incorporating elements such as the COVID-19 impact and competitive analysis.

• Data-driven Analysis: Insights are derived from a blend of qualitative and quantitative research methodologies, ensuring accuracy and reliability.

• Strategic Decision-making: The insights provided enable businesses to make well-informed strategic decisions that align with the dynamic HTAP-Enabling In-Memory Computing Technologies market landscape.

• Risk Mitigation: By understanding the COVID-19 impact and competitive dynamics, businesses can develop strategies to mitigate risks and thrive amid challenges.

• Market Opportunities: The report identifies untapped growth prospects within the HTAP-Enabling In-Memory Computing Technologies market, empowering businesses to seize potential avenues for expansion.

Top players in the HTAP-Enabling In-Memory Computing Technologies market report:
Microsoft
IBM
MongoDB
SAP
DataStax
Aerospike
GridGain

5. FAQs: The FAQs section addresses common queries readers may have regarding the HTAP-Enabling In-Memory Computing Technologies market and the report itself. This section offers concise answers to key questions, shedding light on various aspects of the market and the insights gleaned from the report.

5.1 What are the prevailing trends shaping the HTAP-Enabling In-Memory Computing Technologies market? Trends include sustainability focus, evolving consumer demands for quality products, and technological advancements leading to innovation.

5.2 How did the COVID-19 pandemic impact the HTAP-Enabling In-Memory Computing Technologies market? The pandemic disrupted supply chains, altered consumer behaviour, and influenced market demand for HTAP-Enabling In-Memory Computing Technologies products. Businesses adapted through digital strategies and diversified supply sources.

Buy the report at https://www.orbisresearch.com/contact/purchase-single-user/6701306

HTAP-Enabling In-Memory Computing Technologies Market Segmentation:

HTAP-Enabling In-Memory Computing Technologies Market by Types:

Cloud-Based
On-Premises

HTAP-Enabling In-Memory Computing Technologies Market by Applications:

Large Enterprises (1000+ Users)
Medium-Sized Enterprises (499-1000 Users)
Small Enterprises (1-499 Users)

5.3 Who are the prominent players in the HTAP-Enabling In-Memory Computing Technologies market? Key players vary in market share, product offerings, and geographic reach. Understanding the competitive landscape is pivotal for strategic decision-making.

5.4 What benefits does this report offer for HTAP-Enabling In-Memory Computing Technologies market businesses? The report provides comprehensive insights into COVID-19 impact, competitive analysis, and growth opportunities, empowering businesses to navigate the market adeptly.

5.5 How can businesses leverage insights from this report? Insights can aid businesses in identifying strategic opportunities, data-driven decision-making, tailored strategy formulation, and strengthening their competitive edge.

5.6 What is the projected future outlook for the HTAP-Enabling In-Memory Computing Technologies market? The report outlines the HTAP-Enabling In-Memory Computing Technologies market’s future outlook, considering emerging trends, growth projections, and potential challenges that businesses should be prepared to address.

 

6. Market Size and Growth Projection: An analysis of the market’s size and growth projection is essential for businesses to assess the market’s potential. This section utilizes historical data, industry trends, and growth patterns to provide an informed estimate of the HTAP-Enabling In-Memory Computing Technologies market’s size and its projected growth trajectory.

7. Customer Segmentation and Targeting: Understanding customer segments and effectively targeting them is critical for success in the HTAP-Enabling In-Memory Computing Technologies market. This section delves into the various customer segments within the market and discusses strategies for tailoring products, services, and marketing efforts to meet their specific needs and preferences.

8. Regulatory Landscape and Compliance: The regulatory landscape has a significant impact on the operations of businesses within the HTAP-Enabling In-Memory Computing Technologies market. This section examines the regulatory environment, including relevant policies, standards, and compliance requirements that market players must navigate to ensure legal and ethical operations.

9. Technological Advancements and Innovation: Innovation is a driving force in the HTAP-Enabling In-Memory Computing Technologies market. This section explores technological advancements such as automation, AI, and blockchain that are transforming the way businesses operate within the market. It discusses the potential of these technologies to enhance efficiency, streamline processes, and create new opportunities.

 
Make an inquiry before accessing the report at: https://www.orbisresearch.com/contacts/enquiry-before-buying/6701306

10. Sustainability and Environmental Considerations: Sustainability has become a focal point for businesses across industries, including HTAP-Enabling In-Memory Computing Technologies. This section assesses the industry’s efforts to adopt sustainable practices, reduce environmental impact, and cater to eco-conscious consumers. It explores how businesses can align with sustainability goals while delivering value to customers.

11. Future Outlook and Anticipated Trends: Anticipating future trends is essential for long-term success. This section provides a forward-looking perspective on the HTAP-Enabling In-Memory Computing Technologies market, considering emerging technologies, changing consumer behaviors, and regulatory shifts. It offers insights into how businesses can prepare for and adapt to these trends.

 About Us:

Orbis Research (orbisresearch.com) is a single point of aid for all your market research requirements. We have a vast database of reports from leading publishers and authors across the globe. We specialize in delivering customized reports per our clients’ requirements. We have complete information about our publishers and hence are certain about the accuracy of the industries and verticals in which they specialize. This helps our clients map their needs, and we produce the required market research study for them.

Contact Us:

Hector Costello
Senior Manager – Client Engagements
4144N Central Expressway,
Suite 600, Dallas,
Texas – 75204, U.S.A.
Phone No.: USA: +1 (972)-591-8191 | IND: +91 895 659 5155
Email ID: sales@orbisresearch.com   

Article originally posted on mongodb google news. Visit mongodb google news
