Nvidia Unveils AI, GPU, and Quantum Computing Innovations at GTC 2025

MMS Founder
MMS Daniel Dominguez

Article originally posted on InfoQ. Visit InfoQ

Nvidia presented a range of new technologies at its GTC 2025 event, focusing on advancements in GPUs, AI infrastructure, robotics, and quantum computing. The company introduced the GeForce RTX 5090, a graphics card built on the Blackwell architecture, featuring improvements in energy efficiency, size reduction, and AI-assisted rendering capabilities. Nvidia highlighted the increasing role of AI in real-time, path-traced rendering and GPU performance optimization.

In the data center sector, Nvidia announced the Blackwell Ultra GB300 family of GPUs, designed to enhance AI inference efficiency with 1.5 times the memory capacity of previous models. The company also introduced NVLink, a high-speed interconnect technology that enables faster GPU communication, and Nvidia Dynamo, an AI data center operating system aimed at improving management and efficiency. The DGX Station, a computing platform for AI workloads, was also unveiled to support enterprise AI development.

Nvidia’s automotive division revealed a partnership with General Motors to develop AI-powered self-driving vehicles. The company stated that all software components involved in the project have undergone rigorous safety assessments. Additionally, Nvidia introduced Halos, an AI-powered safety system for autonomous vehicles, integrating hardware, software, and AI-based decision-making.

In robotics, Nvidia introduced the Isaac GR00T N1, an open-source humanoid reasoning model developed in collaboration with Google DeepMind and Disney Research. The Newton physics engine, also open-source, was announced to enhance robotics training by simulating real-world physics for AI-driven robots. Nvidia also expanded its Omniverse platform for physical AI applications with the launch of Cosmos, a generative model aimed at improving AI-driven world simulation and interaction.

The company announced new AI models under the Llama Nemotron family, designed for reasoning-based AI agents. These models are optimized for enterprises looking to deploy AI agents that can work autonomously or collaboratively. Nvidia stated that members of the Nvidia Developer Program could access Llama Nemotron for development, testing, and research.

Nvidia revealed its latest advancements in quantum computing, including the launch of the Nvidia Accelerated Quantum Research Center in Boston. The company is collaborating with Harvard and MIT on quantum computing initiatives. During the conference, Nvidia hosted Quantum Day, where CEO Jensen Huang discussed the evolving role of quantum computing and acknowledged previous underestimations of its development timeline. 

The company introduced a roadmap for future GPUs, including the Vera Rubin GPU, set for release in 2026, followed by Rubin Ultra NVL576 in 2027, which is projected to deliver 15 exaflops of computing power. Nvidia also announced the Feynman GPU, scheduled for 2028, designed to advance AI workloads with enhanced memory and performance capabilities.

Following the conference, online discussions reflected various perspectives on Nvidia’s announcements. Many users expressed interest in the concept of physical AI and its potential applications. 

AI expert Armughan Ahmad shared:

AI is shifting from simple chat assistants to autonomous agents that execute work on our behalf.

While AI strategist Vivi Linsi commented:

With the arrival of agentic and physical AI, AI has developed from talking to Doing – are you ready?

Nvidia’s announcements at GTC 2025 highlight the company’s continued investment in AI, data centers, robotics, and quantum computing, positioning itself at the forefront of next-generation computing infrastructure.



Sei Investments Co. Sells 142,830 Shares of MongoDB, Inc. (NASDAQ:MDB) – MarketBeat

MMS Founder
MMS RSS

Posted on mongodb google news. Visit mongodb google news

Sei Investments Co. lessened its holdings in shares of MongoDB, Inc. (NASDAQ:MDB) by 56.9% during the 4th quarter, according to its most recent disclosure with the SEC. The institutional investor owned 108,209 shares of the company’s stock after selling 142,830 shares during the quarter. Sei Investments Co. owned approximately 0.15% of MongoDB, worth $25,193,000 at the end of the most recent reporting period.

A number of other institutional investors have also recently added to or reduced their stakes in MDB. Intech Investment Management LLC raised its holdings in MongoDB by 10.7% in the 3rd quarter. Intech Investment Management LLC now owns 5,205 shares of the company’s stock worth $1,407,000 after acquiring an additional 502 shares during the period. Charles Schwab Investment Management Inc. increased its stake in shares of MongoDB by 2.8% in the third quarter. Charles Schwab Investment Management Inc. now owns 278,419 shares of the company’s stock worth $75,271,000 after purchasing an additional 7,575 shares during the period. Cerity Partners LLC lifted its position in shares of MongoDB by 8.3% during the 3rd quarter. Cerity Partners LLC now owns 9,094 shares of the company’s stock worth $2,459,000 after purchasing an additional 695 shares during the last quarter. Daiwa Securities Group Inc. boosted its stake in MongoDB by 12.3% during the 3rd quarter. Daiwa Securities Group Inc. now owns 10,323 shares of the company’s stock valued at $2,791,000 after purchasing an additional 1,132 shares during the period. Finally, Independent Advisor Alliance grew its holdings in MongoDB by 5.3% in the 3rd quarter. Independent Advisor Alliance now owns 1,516 shares of the company’s stock valued at $410,000 after buying an additional 76 shares during the last quarter. 89.29% of the stock is currently owned by institutional investors and hedge funds.

MongoDB Stock Performance

Shares of MDB stock opened at $193.66 on Thursday. The stock’s fifty day moving average is $248.25 and its 200-day moving average is $267.64. MongoDB, Inc. has a 52 week low of $173.13 and a 52 week high of $387.19. The stock has a market capitalization of $14.42 billion, a PE ratio of -70.68 and a beta of 1.30.

MongoDB (NASDAQ:MDB) last posted its quarterly earnings data on Wednesday, March 5th. The company reported $0.19 earnings per share for the quarter, missing the consensus estimate of $0.64 by ($0.45). MongoDB had a negative net margin of 10.46% and a negative return on equity of 12.22%. The company had revenue of $548.40 million during the quarter, compared to analyst estimates of $519.65 million. During the same quarter in the previous year, the business posted $0.86 EPS. As a group, analysts forecast that MongoDB, Inc. will post -1.78 earnings per share for the current fiscal year.

Insider Buying and Selling

In related news, insider Cedric Pech sold 287 shares of MongoDB stock in a transaction dated Thursday, January 2nd. The stock was sold at an average price of $234.09, for a total value of $67,183.83. Following the completion of the sale, the insider now directly owns 24,390 shares in the company, valued at $5,709,455.10. This trade represents a 1.16% decrease in their ownership of the stock. The sale was disclosed in a document filed with the Securities & Exchange Commission, which can be accessed through the SEC website. Also, CAO Thomas Bull sold 169 shares of the company’s stock in a transaction dated Thursday, January 2nd. The shares were sold at an average price of $234.09, for a total transaction of $39,561.21. Following the transaction, the chief accounting officer now owns 14,899 shares in the company, valued at approximately $3,487,706.91. The trade was a 1.12% decrease in their ownership of the stock. The disclosure for this sale can be found here. Insiders sold 43,139 shares of company stock worth $11,328,869 in the last quarter. Insiders own 3.60% of the company’s stock.

Wall Street Analysts Forecast Growth

A number of equities analysts recently issued reports on MDB shares. Truist Financial dropped their target price on shares of MongoDB from $400.00 to $300.00 and set a “buy” rating on the stock in a research report on Thursday, March 6th. UBS Group set a $350.00 price objective on shares of MongoDB in a research report on Tuesday, March 4th. Guggenheim raised MongoDB from a “neutral” rating to a “buy” rating and set a $300.00 target price on the stock in a research report on Monday, January 6th. Wells Fargo & Company cut MongoDB from an “overweight” rating to an “equal weight” rating and cut their price target for the company from $365.00 to $225.00 in a report on Thursday, March 6th. Finally, Tigress Financial lifted their price objective on MongoDB from $400.00 to $430.00 and gave the stock a “buy” rating in a report on Wednesday, December 18th. Seven research analysts have rated the stock with a hold rating and twenty-three have assigned a buy rating to the company. According to data from MarketBeat, the stock currently has a consensus rating of “Moderate Buy” and a consensus target price of $320.70.

Read Our Latest Analysis on MDB

MongoDB Company Profile


MongoDB, Inc., together with its subsidiaries, provides a general-purpose database platform worldwide. The company provides MongoDB Atlas, a hosted multi-cloud database-as-a-service solution; MongoDB Enterprise Advanced, a commercial database server for enterprise customers to run in the cloud, on-premises, or in a hybrid environment; and Community Server, a free-to-download version of its database, which includes the functionality that developers need to get started with MongoDB.

Featured Stories

Want to see what other hedge funds are holding MDB? Visit HoldingsChannel.com to get the latest 13F filings and insider trades for MongoDB, Inc. (NASDAQ:MDB).

Institutional Ownership by Quarter for MongoDB (NASDAQ:MDB)

This instant news alert was generated by narrative science technology and financial data from MarketBeat in order to provide readers with the fastest and most accurate reporting. This story was reviewed by MarketBeat’s editorial team prior to publication. Please send any questions or comments about this story to contact@marketbeat.com.


Article originally posted on mongodb google news. Visit mongodb google news



Quantbot Technologies LP Makes New Investment in MongoDB, Inc. (NASDAQ:MDB)

MMS Founder
MMS RSS

Posted on mongodb google news. Visit mongodb google news

Quantbot Technologies LP acquired a new stake in MongoDB, Inc. (NASDAQ:MDB) in the 4th quarter, according to its most recent Form 13F filing with the Securities & Exchange Commission. The institutional investor acquired 3,016 shares of the company’s stock, valued at approximately $702,000.

Several other hedge funds and other institutional investors have also modified their holdings of MDB. Hilltop National Bank raised its stake in MongoDB by 47.2% during the 4th quarter. Hilltop National Bank now owns 131 shares of the company’s stock valued at $30,000 after purchasing an additional 42 shares during the last quarter. Brooklyn Investment Group acquired a new position in shares of MongoDB during the third quarter valued at about $36,000. Continuum Advisory LLC lifted its stake in shares of MongoDB by 621.1% in the third quarter. Continuum Advisory LLC now owns 137 shares of the company’s stock valued at $40,000 after buying an additional 118 shares during the period. NCP Inc. acquired a new stake in MongoDB during the fourth quarter worth about $35,000. Finally, Wilmington Savings Fund Society FSB bought a new position in MongoDB during the third quarter valued at about $44,000. Institutional investors and hedge funds own 89.29% of the company’s stock.

Analyst Upgrades and Downgrades

MDB has been the topic of a number of research reports. Guggenheim upgraded MongoDB from a “neutral” rating to a “buy” rating and set a $300.00 price objective on the stock in a research report on Monday, January 6th. DA Davidson boosted their price target on shares of MongoDB from $340.00 to $405.00 and gave the company a “buy” rating in a report on Tuesday, December 10th. JMP Securities reaffirmed a “market outperform” rating and set a $380.00 price objective on shares of MongoDB in a report on Wednesday, December 11th. KeyCorp cut shares of MongoDB from a “strong-buy” rating to a “hold” rating in a research note on Wednesday, March 5th. Finally, Rosenblatt Securities restated a “buy” rating and set a $350.00 price target on shares of MongoDB in a research note on Tuesday, March 4th. Seven equities research analysts have rated the stock with a hold rating and twenty-three have given a buy rating to the company. According to MarketBeat.com, MongoDB has an average rating of “Moderate Buy” and a consensus target price of $320.70.

View Our Latest Report on MDB

Insider Activity at MongoDB

In other news, CEO Dev Ittycheria sold 2,581 shares of the company’s stock in a transaction dated Thursday, January 2nd. The stock was sold at an average price of $234.09, for a total value of $604,186.29. Following the completion of the sale, the chief executive officer now directly owns 217,294 shares of the company’s stock, valued at $50,866,352.46. This represents a 1.17% decrease in their position. The sale was disclosed in a legal filing with the SEC, which is available through this hyperlink. Also, CAO Thomas Bull sold 169 shares of MongoDB stock in a transaction that occurred on Thursday, January 2nd. The stock was sold at an average price of $234.09, for a total transaction of $39,561.21. Following the transaction, the chief accounting officer now directly owns 14,899 shares in the company, valued at $3,487,706.91. The trade was a 1.12% decrease in their position. The disclosure for this sale can be found here. Insiders have sold 43,139 shares of company stock worth $11,328,869 over the last ninety days. Company insiders own 3.60% of the company’s stock.

MongoDB Stock Down 2.5%

MDB stock opened at $193.66 on Thursday. MongoDB, Inc. has a twelve month low of $173.13 and a twelve month high of $387.19. The company has a market cap of $14.42 billion, a P/E ratio of -70.68 and a beta of 1.30. The business has a fifty day moving average of $248.25 and a 200-day moving average of $267.64.

MongoDB (NASDAQ:MDB) last announced its quarterly earnings data on Wednesday, March 5th. The company reported $0.19 earnings per share for the quarter, missing the consensus estimate of $0.64 by ($0.45). The firm had revenue of $548.40 million for the quarter, compared to the consensus estimate of $519.65 million. MongoDB had a negative return on equity of 12.22% and a negative net margin of 10.46%. During the same period in the prior year, the firm posted $0.86 earnings per share. As a group, equities research analysts anticipate that MongoDB, Inc. will post -1.78 earnings per share for the current fiscal year.

MongoDB Company Profile


MongoDB, Inc., together with its subsidiaries, provides a general-purpose database platform worldwide. The company provides MongoDB Atlas, a hosted multi-cloud database-as-a-service solution; MongoDB Enterprise Advanced, a commercial database server for enterprise customers to run in the cloud, on-premises, or in a hybrid environment; and Community Server, a free-to-download version of its database, which includes the functionality that developers need to get started with MongoDB.

Featured Stories

Institutional Ownership by Quarter for MongoDB (NASDAQ:MDB)

This instant news alert was generated by narrative science technology and financial data from MarketBeat in order to provide readers with the fastest and most accurate reporting. This story was reviewed by MarketBeat’s editorial team prior to publication. Please send any questions or comments about this story to contact@marketbeat.com.


Article originally posted on mongodb google news. Visit mongodb google news



Palantir (PLTR) and MongoDB (MDB) Defy Software Sector Downtrend – GuruFocus

MMS Founder
MMS RSS

Posted on mongodb google news. Visit mongodb google news

According to Morgan Stanley, the overall performance of the U.S. software sector deteriorated in the fourth quarter. However, infrastructure software companies like Palantir (PLTR) and MongoDB (MDB) have bucked the trend, achieving strong revenue growth for the second consecutive quarter.

Analysts led by Keith Weiss at Morgan Stanley reported a decline in revenue, operating margins, and earnings per share across the software industry. The percentage of companies exceeding market expectations by 1% dropped, and the median extent of exceeding expectations turned negative after two quarters of improvement, falling below historical averages. For instance, 64% of companies exceeded revenue expectations by more than 1%, down from 71% in the previous quarter. Regarding earnings per share, 69% surpassed market expectations by $0.02 or more, compared to 71% previously.

In contrast, infrastructure software companies have managed to avoid this downward trend, showing robust revenue performance for the second quarter in a row. SolarWinds (SWI) exceeded market expectations by 7.2%, Palantir by 6.6%, MongoDB by 5.6%, and Elastic (ESTC) by 4.5%. Notably, 80% of infrastructure software companies exceeded revenue expectations by more than 1%, and 85% surpassed earnings per share expectations by more than $0.02.

Weiss highlighted Palantir as the second-best performing stock in the infrastructure software sector, despite high expectations. The company has seen accelerated revenue growth for six consecutive quarters, indicating its potential for sustained growth. Palantir’s stock has risen by 22% this year, while the Nasdaq has declined by over 7% during the same period.

Article originally posted on mongodb google news. Visit mongodb google news



Top 10 Big Data Certifications in 2025 – Analytics Insight

MMS Founder
MMS RSS

Posted on nosqlgooglealerts. Visit nosqlgooglealerts

Big data plays a crucial role in various industries, and professionals looking for career growth can benefit from certifications that validate their expertise. Here are the top 10 big data certifications in 2025.

1. IBM Data Science Professional Certificate

This certificate helps learners understand the basics of data science, machine learning, and data visualization, with no previous data science knowledge or experience required.

2. Cloudera CDP Certification Program

Cloudera offers certifications to users of the Cloudera Data Platform (CDP). The exams assess both general user and administrator mastery.

3. Certified Analytics Professional (CAP)

This certification covers analytics problem framing, model building and data handling. Great for those wanting to polish their expertise in analytics.

4. SAS Certified Data Scientist

SAS offers a comprehensive program that covers machine learning, AI and data curation. It also requires passing multiple exams.

5. Data Science Council of America (DASCA) Certifications

The Data Science Council of America (DASCA) offers different certifications for engineers, analysts, and scientists working in big data domains. Credentials are available at both entry and advanced career levels.

6. MongoDB Professional Certification

MongoDB offers professional certification exams that validate developers and database administrators who work with its NoSQL database technology. The certifications serve as proof of competence in managing big data projects.

7. Dell EMC Data Scientist Certifications

Dell EMC offers certifications covering data science, big data analytics, and data engineering. These exams focus on practical skills in advanced analytics.

8. Microsoft Azure Data Scientist Associate

This certification evaluates practitioners who work with Azure Machine Learning and Databricks. It is well suited to professionals who build cloud-based data solutions.

9. Open Certified Data Scientist

This certification program has requirements that differ from standard exams. Candidates must document their abilities through written submissions and gather feedback from their peers.

10. Columbia University Data Science Certificate

Columbia University offers non-degree training covering core data science knowledge. It includes coursework in machine learning and data analysis.

Conclusion

Big data certifications help professionals validate their expertise. They also open doors to better job opportunities. Choosing the right certification depends on career goals and industry requirements.



GitHub Leverages AI for More Accurate Code Secret Scanning

MMS Founder
MMS Steef-Jan Wiggers

Article originally posted on InfoQ. Visit InfoQ

GitHub has launched an AI-powered secret scanning feature within Copilot, integrated into GitHub Secret Protection, that leverages context analysis to improve the detection of leaked passwords in code significantly. This new approach addresses the shortcomings of traditional regular expression-based methods, which often miss varied password structures and generate numerous false positives.

According to a GitHub blog post detailing the development, the system now analyzes the usage and location of potential secrets to reduce irrelevant alerts and provide more accurate notifications critical to repository security. Sorin Moga, a senior software engineer at Sensis, commented on LinkedIn that this marks a new era in platform security, where AI not only assists in development, but also safeguards code integrity.

A key challenge identified during the private preview of GitHub’s AI-powered secret scanning was its struggle with unconventional file types and structures, highlighting the limitations of relying solely on the large language model’s (LLM) initial training data. GitHub’s initial approach involved “few-shot prompting” with GPT-3.5-Turbo, where the model was provided with examples to guide detection.

To address these early challenges, GitHub significantly enhanced its offline evaluation framework by incorporating feedback from private preview participants to diversify test cases and leveraging the GitHub Code Security team’s evaluation processes to build a more robust data collection pipeline. They even used GPT-4 to generate new test cases based on learnings from existing secret scanning alerts in open-source repositories. This improved evaluation allowed for better measurement of precision (reducing false positives) and recall (reducing false negatives).

GitHub experimented with various techniques to improve detection quality, including trying different LLM models (like GPT-4 as a confirming scanner), repeated prompting (“voting”), and diverse prompting strategies. Ultimately, they collaborated with Microsoft, adopting their MetaReflection technique, a form of offline reinforcement learning that blends Chain of Thought (CoT) and few-shot prompting to enhance precision.

As stated in the GitHub blog post:

We ultimately ended up using a combination of all these techniques and moved Copilot secret scanning into public preview, opening it widely to all GitHub Secret Protection customers.

To further validate these improvements and gain confidence for general availability, GitHub implemented a “mirror testing” framework. This involved testing prompt and filtering changes on a subset of repositories from the public preview. By rescanning these repositories with the latest improvements, GitHub could assess the impact on real alert volumes and false positive resolutions without affecting users.

This testing revealed a significant drop in both detections and false positives, with minimal impact on finding actual passwords, including a 94% reduction in false positives in some cases. The blog post concludes that:

This before-and-after comparison indicated that all the different changes we made during private and public preview led to increased precision without sacrificing recall, and that we were ready to provide a reliable and efficient detection mechanism to all GitHub Secret Protection customers.

The lessons learned during this development include prioritizing accuracy, using diverse test cases based on user feedback, managing resources effectively, and fostering collaboration. These learnings are also being applied to Copilot Autofix. Since the general availability launch, Copilot secret scanning has been part of security configurations, allowing users to manage which repositories are scanned.



End of the Road for FaunaDB: Is the Future Open Source?

MMS Founder
MMS Renato Losio

Article originally posted on InfoQ. Visit InfoQ

The team behind the distributed serverless database Fauna has recently announced plans to shut down the service by the end of May. While the managed database will be terminated soon and all customers will have to migrate to other platforms, Fauna is committing to releasing an open source version of the core database technology alongside the existing drivers and CLI tooling.

Started in 2011 as FaunaDB by the team that scaled Twitter by building its in-house databases and systems, Fauna tried for many years to combine the power of a relational database with the flexibility of JSON documents. Fauna was designed to scale horizontally within a data center to maximize throughput while easily spanning globally distributed sites, ensuring reliability and local performance. With the vision of enabling “applications without database limits” and claiming use by more than 80,000 development teams, the service has now reached the end of the road.

According to the Fauna Service End of Life FAQ, the Fauna service will be turned off on May 30th, and all Fauna accounts will be deleted. The team writes:

Driving broad based adoption of a new operational database that runs as a service globally is very capital intensive. In the current market environment, our board and investors have determined that it is not possible to raise the capital needed to achieve that goal independently.

Yan Cui, AWS Serverless Hero and serverless expert, writes:

Sad to see Fauna go. They were one of the first truly serverless databases on the market.

Ankur Raina, senior staff sales engineer at Cockroach Labs, summarizes:

The DB market is brutal (…) Getting large customers on Serverless databases is hard. (…) Fauna was trying to build the document model of MongoDB, consistency & geo distribution of CockroachDB but without any ability to run it beyond two cloud providers.

The sunsetting of a once-popular database has sparked many reactions within the community. In a popular Hacker News thread, Pier Bover, founder of Waveki, writes:

A decade ago it seemed that edge computing, serverless, and distributed data was the future. Fauna made a lot of sense in that vision. But in these years since, experimenting with edge stuff, I’ve learned that most data doesn’t really need to be distributed. You don’t need such a sophisticated solution to cache a subset of data for reads in a CDN or some KV. What I’m saying is that, probably, Cloudflare Workers KV and similar services killed Fauna.

User strobe adds:

I found Fauna very interesting from a technical perspective many years ago, but even then, the idea of a fully proprietary cloud database with no reasonable migration options seemed pretty crazy at the time. (…) Hope that something useful will be open sourced as a result.

Peter Zaitsev, open source advocate, questions instead:

While there is no alternative history, I wonder what would have happened if Fauna had chosen to start as Open Source, become 100x more widely adopted but monetize a smaller portion of their customers due to “competing” Open Source alternatives.

The market of distributed databases that once competed with Fauna includes Google Spanner, PlanetScale, CockroachDB, and TiDB, among others. A migration guide is now available, offering the option to create snapshot exports, with the exported data stored as JSON files in an AWS S3 bucket. For smaller collections, data can also be exported using FQL queries.



Tessell’s Multi-Cloud DBaaS is Now Available on Google Cloud

MMS Founder
MMS RSS

Posted on mongodb google news. Visit mongodb google news

Tessell, the leading next-generation, multi-cloud database-as-a-service (DBaaS), is announcing that the Tessell DBaaS is now available in the Google Cloud Marketplace, accompanied by support for Oracle, PostgreSQL, SQL Server, MySQL, MongoDB, and Milvus on all four major cloud platforms—Azure, AWS, Google Cloud, and OCI. With this launch, Tessell empowers enterprises to modernize their transactional applications, database estates, and data architectures all within Google Cloud’s infrastructure. 

This announcement taps into the recent collaboration between Oracle and Google Cloud, which brought support for Oracle databases on Google Cloud infrastructure. Building off of this opportunity for innovation in cloud-based data management, Tessell delivers a fully managed solution for streamlining the complexities of managing multiple data ecosystems at once, according to the company. 

“Tessell’s support for Oracle, PostgreSQL, SQL Server, MySQL, MongoDB, and Milvus on Google Cloud empowers enterprises to capitalize on the newly available opportunity to bring application workloads to Google Cloud GCP,” said Bala Kuchibhotla, co-founder and CEO at Tessell. “Tessell has already seen rapid adoption of its fully managed database service on Google Cloud, with customers successfully running mission-critical workloads. Organizations are leveraging the platform to simplify operations, improve scalability, and accelerate cloud adoption without the complexities traditionally associated with database management. As more enterprises recognize the benefits of this streamlined approach, Tessell looks forward to expanding its footprint and supporting even more businesses in their cloud transformation journey.”

Tessell’s fully managed service offers the following advantages:

  • Automated maintenance, including for patching, backup, and recovery, which helps reduce downtime and improve reliability
  • High availability and disaster recovery with built-in multi-zone availability and cross-region recovery to ensure business continuity 
  • Data security and compliance with adaptable backup options and strong recovery mechanisms that adhere to strict compliance and regulatory policies
  • Enterprise-grade flexibility by enabling the automation and security benefits of PaaS with the customization features of IaaS
  • Unified security and compliance posture, allowing enterprises to extend their existing security and compliance services to Google Cloud while bringing their own keys 

“Bringing Tessell DBaaS to Google Cloud Marketplace will help customers quickly deploy, manage, and grow the managed database service on Google Cloud’s trusted, global infrastructure,” said Dai Vu, managing director, marketplace and ISV GTM programs at Google Cloud. “Tessell can now securely scale and support customers on their digital transformation journeys.”

“Tessell’s deep database expertise, customer-first approach, and solution-focused mindset made our cloud migration seamless,” said Martti Kontula, head of OT and data at Landis+Gyr. “Their ability to optimize and manage database workloads on Google Cloud ensured a smooth transition. The Tessell platform delivers a powerful, intuitive experience, providing full visibility into database health and performance at a glance. For any enterprise seeking to run databases efficiently in the cloud, Tessell is the ideal choice.”

To learn more about Tessell, please visit https://www.tessell.com/.

Article originally posted on mongodb google news. Visit mongodb google news



Valkey 8.1’s Performance Gains Disrupt In-Memory Databases – The New Stack

MMS Founder
MMS RSS

Posted on nosqlgooglealerts. Visit nosqlgooglealerts


Redis fork Valkey, with a new multithreading architecture, delivers a threefold improvement in speed and memory efficiency gains.



NAPA, Calif — A year ago, Redis announced that it was dumping the open source BSD 3-clause license for its Redis in-memory key-value database and was moving it to a “source-available” Redis Source Available License (RSALv2) and Server Side Public License (SSPLv1).

That move went over like a lead brick with many Redis developers and users. So the disgruntled developers forked a new project, Valkey, as “an open source alternative to the Redis in-memory NoSQL data store.” It has since become clear that this is a remarkably successful fork.

How successful? According to a Percona research paper, “75% of surveyed Redis users are considering migration due to recent licensing changes. … Of those considering migration, more than 75% are testing, considering, or have adopted Valkey.” Perhaps a more telling point is that third-party Redis developer companies, such as Redisson, are supporting both Redis and Valkey.

Multithreading and Scalability

It’s not just the licensing changes that make Valkey attractive, though. At the Linux Foundation Member Summit, Madelyn Olson, a principal software engineer at Amazon Web Services (AWS) and a Valkey project maintainer, said in her keynote speech that Valkey is far faster thanks to incorporating enhanced multithreading and scalability features.

That, Olson added, was not the original plan. “We wanted to keep the open source spirit of the Redis project alive, but we also wanted the value to be more than just a fork. We organized a contributor summit in Seattle where we got together developers and users to try to figure out what this new project should look like. At the time, I was really expecting us to just focus on caching, the main workload that Redis open source was serving. What we heard from our users is that they wanted so much more. They wanted Valkey to be a high-performance database for all sorts of distributed workloads. And so although that would add a lot of complexity to the project, the new core team sort of took on that mantle, and we tried to build that for our community.”

They were successful. By August of 2024, Dirk Hohndel, a Linux kernel developer and long-time open source leader, said the Valkey 8.0 redesign of Redis’s single-threaded event loop threading model with a more sophisticated multithreaded approach to I/O operations had given him “roughly a threefold improvement in performance, and I stream a lot of data, 60 million data points a day.” In addition, with Valkey 8, he saw about a “20% reduction in the size of separate cache tables. When you’re talking about terabytes and more on Amazon Web Services, that’s a real savings in size and cash.”

Shifting back to the current day, Olson added, “Over the last couple of months, we’ve been dramatically improving the core engine by adding Rust into the core to add memory safety. We’ve been changing the internal algorithm for how the cluster mode works to improve reliability and improve the failover times. We’re also dramatically changing how the internal data structures work since they were based on 10-year-old pieces of software, so they can better take advantage of modern hardware.”

In addition, the developer team has rebuilt the key-value store from scratch to take better advantage of modern hardware based on the work done at Google on the so-called Swiss Tables. Olson continued, “In just a few short weeks, we’ll release these improvements as part of Valkey 8.1 just one year after the project’s anniversary.” This new release includes up to 20% memory efficiency improvements, the most common bottleneck within caching systems, and state-of-the-art data structures.

Looking ahead, Valkey plans to introduce more multithreaded performance improvements, a highly scalable clustering system, and new core changes to data types. Does that sound good to you? The project remains open to new contributors and invites interested parties to join via GitHub.




Presentation: A Platform Engineering Journey: Copy and Paste Deployments to Full GitOps

MMS Founder
MMS Jemma Hussein Allen

Article originally posted on InfoQ. Visit InfoQ

Transcript

Allen: I’ll be talking about the platform engineering journey, from copy and paste deployments to full GitOps. The goal of this session is to share some lessons I’ve learned during my career and hopefully give you some solutions to the challenges you’re already facing or may face in the future. The key learnings I want you to take away are, technology moves quickly and it can be really difficult and time-consuming to keep up with the latest innovations. I’ll share some practical strategies that will hopefully help make things easier. Secondly, automation will always save you time in the long run, even if it takes a bit more time in the short term. Thirdly, planning short and long-term responsibilities for a project as early as possible can save everyone a lot of headaches. Finally, a psychologically safe working environment benefits everyone.

Professional Journey

I’ve always loved technology. My first home computer was an Amstrad, one of those great big things that takes up the whole desk. We didn’t have home internet connection at the time. Games were on floppy disks that were actually still floppy and took around 10 minutes to load, if you were lucky. When I was a bit older, technology started to advance quickly. The Y2K bug took over headlines. Dial-up internet became mainstream. I bought my first mobile phone, which was pretty much indestructible. Certainly, better than my one today, which breaks all the time. I followed my passion for technology. I did a degree in software engineering, and a few years into my career, a PgCert in advanced information systems. After I graduated, I started working as a web developer for a range of small media companies. I was a project manager and developer for an EU-funded automatic multilingual subtitling project, which is a pretty interesting first graduate job.

Then, as a web developer, building websites for companies like Sony, Ben & Jerry’s, Glenlivet. It involved a range of responsibilities, so Linux, Windows Server, and database administration, building HTML pages and Photoshop designs, writing both front and backend code, and some project management thrown into the mix. As a junior developer, I did say at least a few times, it works for me, so that must be a problem with something that operations manage, so I didn’t care about it. Now, of course, I know better. After a few years, around the time DevOps became popular, I started to work in larger enterprise companies as a DevOps engineer, as it was called then. Sometimes it was focused on automation and sometimes on software development. I moved into a senior engineering role and then a technical lead.

Then I moved to infrastructure architecture for a bit and then back to a tech lead again, where I am now. After two kids and many different tools, tech stacks, and projects later, I support a centralized platform and product teams by developing self-service tooling and reusable components.

A Hyper-Connected World

We’re in 2024. It’s a hyper-connected world. These days, we can provision hundreds or even thousands of cloud resources anywhere in the world, with the main barrier being cost. Today, around 66% of the world has internet access. We can contact anyone who’s connected 24 hours a day, 7 days a week. We can contact family and friends at the tap of a button. We can access work email anytime, day or night, which can be a good or a bad thing, depending on if you’re on call. We can get instant notifications about things happening around the world and can livestream events, for example, the eclipse. We can ask a question and receive a huge range of answers based on real-time information. We really are in a hyper-connected world.

Looking Back

Let’s take a quick step back to the 1980s. In 1984, the internet had around 1,000 devices that were mainly used by universities and large companies. This was the time before home computers were commonplace, and most people had to be at a bookshop or library if they wanted information for a school project or a particular topic. To set the context, here’s a video from a Tomorrow’s World episode in 1984, demonstrating how to send an email from home.

Speaker 1: “Yes, it’s very simple, really. The telephone is connected to the telephone network with a British telecom plug, and I simply remove the telephone jack from the telecom socket and plug it into this box here, the modem. I then take another wire from the modem and plug it in where the telephone was. I can then switch on the modem, and we’re ready to go. The computer is asking me if I want to log on, and it’s now telling me to phone up the main Prestel computer, which I’ll now do”.

Speaker 2: “It is a very simple connection to make”.

Speaker 1: “Extremely simple. I can actually leave the modem plugged in once it’s done that, without affecting the telephone. I’m now waiting for the computer to answer me”.

Allen: I’m certainly glad it’s a lot easier to send an email nowadays. By 1992, the internet had 1 million devices, and now in 2024, there are over 17 billion.

Technology Evolves Quickly

That leads into the first key learning, technology moves quickly. We all lead busy lives, and we use technology both inside and outside of work. In a personal context, technology can make our day-to-day lives a lot easier. For example, we saw a huge rise in video conferencing for personal calls during the COVID lockdowns. In a work context, we need to know what the latest advancements are and whether they’re going to be relevant and useful to us and our employer. Of course, there are key touch points like here at QCon, where you can hear some great talks about the latest innovations and discuss solutions to everyday problems. There are more technology-specific ones like AWS Summit, Google IO, and one of the many Microsoft events. Then there are lots of good blogs and other great online content. At a company level, you’ve got general day-to-day knowledge sharing with colleagues and hackathons, which can be really valuable.

With so much information, how can we quickly and effectively find the details we need to do our jobs? One tool that I’ve found to be a really useful discussion point at work is the Tech Radar. Tech Radars can really help with keeping up with the popularity of different technologies. The Tech Radar is an idea first put forward by Darren Smith at Thoughtworks. It’s essentially a point-in-time review of techniques, tools, platforms, frameworks, and languages. There are four rings. Adopt, technologies that should be adopted because they can provide significant benefits. Trial, technologies that should be tried out to assess their potential. Assess, technologies that need evaluation before use. Hold, technologies that should be avoided or decommissioned. Thoughtworks releases updated Tech Radars twice a year, giving an overview of the latest technologies, including details, and whether any existing technologies have moved to a different ring.

I found these can also be really useful at a company or department level to help developers choose the best tools for new development projects. I’ve personally seen them work quite well in companies as they help to keep everyone moving in the same technical direction and using similar tooling. Let’s face it, no one wants to implement a new service only to find that it uses a tool that’s due to be decommissioned. It will cost the business money funding the migration, and all the work to integrate the legacy tool will be lost. Luckily, creating a basic technical proof of concept for a Tech Radar is quite easy. There are some existing tools you can use as a starting point.

For example, Zalando have an open-source GitHub repository you can use to get up and running. Or if you use a developer portal like Backstage, there’s a Tech Radar plugin for that as well. One of the key benefits about Tech Radar is it’s codified, unlike a static diagram, which can get old quite quickly. Things like automated processes, user contributions, and suggestions for new items can be easily integrated. Any approval mechanisms that are needed can also be added quite easily.

Let’s take a general example. You’re in a big company, and you’re selecting an infrastructure as code tool for a new project to provision some AWS cloud infrastructure. Your department normally uses Terraform. Terraform’s probably an obvious choice. There are plenty of examples in the company, some InnerSource modules. You know you can write the code in a few days. Looking at the other options, Chef is good, but it’s not really used very much in the departments at the moment. As the company tends to move towards more cloud-agnostic technology, they don’t really use CloudFormation. Lately, you’ve heard that some teams have been trying out Pulumi, and some have tried out the Terraform CDK. You look at the company Tech Radar, and see that Pulumi is under assess, and the Terraform CDK is under trial.

As the project has tight timelines, you know you need a tool that’s well integrated with company tooling. While it might be worth confirming that Pulumi is still in the assess stage, after that, you probably want to focus any remaining investigation time on checking if there’s any benefit to trialing the Terraform CDK. If you can’t find any benefit to that, then you probably go with standard Terraform, because it’s the easiest option. You know it integrates with everything already, even if it’s not particularly innovative. Of course, if the project didn’t have those time constraints, then you could spend more time investigating whether there’s actually any benefit to using the newer tooling, the Terraform CDK or Pulumi, and then putting a business case forward to use those.

Another strategy that I found to be quite useful in adopting new tools within an organization is InnerSource. It’s something that’s been gaining popularity recently. It’s a term coined by Timothy O’Reilly, the founder of O’Reilly Media. InnerSource is a concept of using open-source software development practices within an organization to improve software development, collaboration, and communication. More practically, InnerSource helps to share reusable components and development effort within a company. This is something that can be really well suited to larger enterprise organizations or those with multiple departments. What are the benefits of InnerSource? You don’t need to start from scratch. You can use existing components that have already been developed internally for the company, which means less work for you, which is always a win. InnerSource components can also be really useful if a company has specific logic.

For example, if all resources of a certain type need to be tagged with specific labels for reporting purposes. It’s an easy way of making sure all resources are compliant, and changes can be applied in a single place and then propagated to all areas that use the code. If you find suitable components that meet, for example, 80% of requirements, then you can spend your development time building the extra 20% functionality, and then contributing that back to the main component for other people to use in the future and also for yourself to use in the future. What are the challenges? If pull requests take a long time to be merged back to the main InnerSource code, it can then mean that you have multiple copies of the original InnerSource code in your repo or in the branch. You then need to go back and update your code to point to the main InnerSource branch once your PR has been merged.

In reality, I’ve seen that InnerSource code that’s been copied across can end up staying around for quite a long time because people forget to go back and repoint to the main InnerSource repo. Making sure that InnerSource projects have active maintainers can also help solve the issue. One other problem is not having shared alignment on the architecture of components. Should the new functionality be added to the existing components, or is it best to create a whole new component for it? Having alignment on things like these can make the whole process a lot easier.

Automation

Moving on to another key learning. Automation will almost always save you time and effort in the long run, even if it takes a bit more effort initially. Running through the three key terms, there’s continuous integration, the regular merging of code changes into a code repository, triggering automated builds and tests. Continuous delivery, the automated release of code that’s passed the build and test stages of the CI step. Then continuous deployment, the automated deployment of released code to a production environment with no manual intervention. Going back to the topic of copy and paste deployments, here’s an example of a deployment pipeline from somewhere I worked a few years ago. Developers, me included, had a desktop which had file shares for each of the different environments, so dev, test, and prod, which linked to the servers in the different environments. Changes were tracked in a normal project management tool, think something like Jira.

Code was committed to a local Subversion checkout, and unit tests were run locally, but there was no status check to make sure they had actually been run. Deployments involved copying code from the local desktop into the file share directory, and then it would go up to the server. As I’m sure you can see, there are a lot of downsides to this deployment method. Sometimes there were problems with the file share, and not all files were copied across at the same time, meaning the environment was out of sync. Sometimes, because of human error, not all of the files were copied across to the new environment. If tests were run locally and there were no status checks, then changes could be deployed that hadn’t been fully tested.

Another issue that we had quite a few times was that code changes needed to align with database changes. I know it’s still a problem nowadays, but especially with this method, the running code could be incompatible with the database schema, so trying to update both at the same time didn’t work, and you ended up with failed requests from the user.

Given all these downsides, and as automation was starting to become popular, we decided to move to a more automated development approach. There are many CI/CD tools available: GitLab, GitHub Actions, GoCD. In this case, we used Jenkins. Even after automation, we didn’t have full continuous deployment, as production deployment still required a manual click to trigger the Jenkins pipeline. I’ve seen that quite a lot in larger companies and in services that need high availability because they still need that human in the mix to trigger the deployment.

The main benefit of using automation, as is probably quite clear, is that deploying code from version control instead of a developer’s local checkout reduced mistakes. As the full deployment process was triggered from a centralized place, Jenkins, at the click of a button, any local desktop inconsistencies were removed completely, which made everyone’s life a lot easier. Traceability of the deployments was also a lot easier, as you can see everything that’s happened in Jenkins, so it’s easier to identify the root cause of any issues. Obviously, in the case of Jenkins, and also any other tool, there are quite a few integrations that you can use to integrate with your other tooling.

Moving on to another automated approach, let’s look at GitOps for infrastructure automation. This was first coined by Alexis Richardson, the Weaveworks CEO. GitOps is a set of principles for operating and managing software systems. The four principles are, the desired system state must be defined declaratively. The desired system state must be stored in a way that’s immutable and versioned. The desired system state is automatically pulled from source without any manual intervention.

Then, finally, continuous reconciliation. The system state is continuously monitored and reconciled to whatever’s stated in the code. CI/CD is important for application deployments, of course, but it’s something that can be overlooked for the underlying infrastructure, as infrastructure tends to change less often than the application code, even though infrastructure deployments can be pretty easy to automate. It can be quite common, obviously, for legacy systems to actually not have that infrastructure automation in place anyway. There are lots of infrastructure as code tools already, generic ones like Terraform, Pulumi, Puppet, Ansible, or vendor-specific ones like Azure Resource Manager, Google Cloud Resource Manager, and AWS CloudFormation.

A tool that I’ve had some experience with is Terraform. I’m going to run through a very basic workflow to show you how easy it is to set up. It implements the GitOps strategy for provisioning infrastructure as code, using Terraform Cloud to provision AWS infrastructure from GitHub source code. I’ll show you the diagram, which will make it clearer. It’s something that could be used for new infrastructure or applied to existing infrastructure that isn’t already managed by code. Let’s go through an overview of the setup. It’s separated into two parts. The green components show the identity provider authentication setup between GitHub and Terraform Cloud, and between AWS and Terraform Cloud. The purple components rely on the green components to provision the resources defined in Terraform in the GitHub repository.

I’ll show you some of the Terraform that can be used to configure the setup, as well as screenshots from the console, just to make it clearer. Let’s start by setting up the connection between Terraform Cloud and GitHub. This can be done with a Terraform resource, because there’s a Terraform Cloud provider, just as there is for many other tools. Unfortunately, this resource does require a GitHub personal access token, but it is still possible to automate, which is the point here. Here are the screenshots of the setup. This sets up the VCS connection between GitHub and Terraform Cloud. As you can see, there are multiple options: GitHub, GitLab, Bitbucket, Azure DevOps. This is the permission step to authorize the connection between Terraform Cloud and your GitHub repository. This can be set at a repository level or at a whole organization level.
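To give a rough idea of the Terraform involved, here’s a minimal sketch using the hashicorp/tfe provider’s tfe_oauth_client resource. The organization name and the variable holding the GitHub personal access token are placeholders added for illustration, so treat this as an outline of the approach rather than the exact code from the setup.

```hcl
terraform {
  required_providers {
    tfe = {
      source = "hashicorp/tfe"
    }
  }
}

provider "tfe" {
  # Assumes a Terraform Cloud API token is supplied, e.g. via the TFE_TOKEN environment variable
}

variable "github_pat" {
  description = "GitHub personal access token for the VCS connection (placeholder variable name)"
  type        = string
  sensitive   = true
}

# VCS (OAuth) connection between Terraform Cloud and GitHub
resource "tfe_oauth_client" "github" {
  organization     = "my-org" # placeholder organization name
  api_url          = "https://api.github.com"
  http_url         = "https://github.com"
  oauth_token      = var.github_pat
  service_provider = "github"
}
```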

In terms of least privilege for that authorization, it’s probably best to set it at the repository level if you can. This is an example of a Terraform Cloud workspace configuration and the link to the GitHub repo using the tfe_workspace resource. There are lots of other configuration options, and as always with infrastructure as code, it’s very powerful and easy to scale: you can create multiple workspaces all with the same configuration. This is the repository selection step to link to the new workspace; only the repositories that you’ve authorized will appear. Once you’ve done that, it’s on to configuration. You’ve got the auto-apply settings: if you want to do full continuous deployment, you can configure this to apply automatically whenever there’s a successful plan. Then, what about the run triggers? You can get it to trigger a run whenever changes are pushed to any file in the repo, or constrain it slightly and restrict it to particular file paths or particular Git tags.
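As a hedged sketch of what that workspace configuration might look like, continuing the earlier snippet (the workspace, organization, and repository names are placeholders), something along these lines links the workspace to the repo, turns on auto-apply, and restricts run triggers to a particular path:

```hcl
# Workspace linked to the GitHub repo via the OAuth client created above
resource "tfe_workspace" "s3_demo" {
  name         = "s3-demo" # placeholder workspace name
  organization = "my-org"  # placeholder organization name

  # Apply automatically after a successful plan (full continuous deployment)
  auto_apply = true

  # Only queue runs when files under this path change
  trigger_prefixes = ["environments/prod/"]

  vcs_repo {
    identifier     = "my-org/s3-demo-repo" # placeholder GitHub repository
    branch         = "main"
    oauth_token_id = tfe_oauth_client.github.oauth_token_id
  }
}
```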

It’s so flexible that there are hundreds of different options you can use, but the point here is that it’s easy to set up and configure for your use case. Then the PR configuration: ticking this option will automatically trigger a Terraform plan every time a PR is created, and the link to the Terraform plan also appears in the GitHub PR, so it’s fully integrated. That’s the Terraform Cloud and GitHub connection set up. Moving on to the Terraform Cloud and AWS connection. First, the identity provider authentication: this is the Terraform that fetches the Terraform Cloud TLS certificate and then sets up the connection. This is a screenshot as well. Then you’ve got the identity provider set up with the IAM role that Terraform Cloud is allowed to assume, and then the attached policy saying what permissions Terraform Cloud has.
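For the identity provider side, here’s a rough sketch following the standard Terraform Cloud dynamic credentials pattern; the role name and the organization/workspace in the subject condition are placeholders, and it assumes an AWS provider is also configured in this bootstrap configuration.

```hcl
# Fetch Terraform Cloud's TLS certificate to get its thumbprint
data "tls_certificate" "tfc" {
  url = "https://app.terraform.io"
}

# Register Terraform Cloud as an OIDC identity provider in the AWS account
resource "aws_iam_openid_connect_provider" "tfc" {
  url             = "https://app.terraform.io"
  client_id_list  = ["aws.workload.identity"]
  thumbprint_list = [data.tls_certificate.tfc.certificates[0].sha1_fingerprint]
}

# IAM role that Terraform Cloud runs are allowed to assume
resource "aws_iam_role" "tfc_role" {
  name = "tfc-gitops-demo" # placeholder role name

  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect    = "Allow"
      Principal = { Federated = aws_iam_openid_connect_provider.tfc.arn }
      Action    = "sts:AssumeRoleWithWebIdentity"
      Condition = {
        StringEquals = {
          "app.terraform.io:aud" = "aws.workload.identity"
        }
        StringLike = {
          # Restrict which organization/workspace can assume the role
          "app.terraform.io:sub" = "organization:my-org:project:*:workspace:s3-demo:run_phase:*"
        }
      }
    }]
  })
}
```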

The attached permissions policy in this example is s3:* on everything, which is probably overly permissive, but it’s just to show the idea; you can configure this with any AWS IAM policy you like. That’s the screenshot. Now that the AWS part has been configured, you just add the environment variables to Terraform Cloud to tell it which AWS role to assume and which authentication mode to use. That’s it.
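Sketching that last step, again continuing the earlier snippets with placeholder names, the overly broad policy and the two workspace environment variables could look something like this; TFC_AWS_PROVIDER_AUTH and TFC_AWS_RUN_ROLE_ARN are the variable names Terraform Cloud’s dynamic credentials feature expects.

```hcl
# The deliberately broad policy from the example: S3 anything.
# In practice you would scope this down to exactly what the workspace manages.
resource "aws_iam_role_policy" "tfc_s3" {
  name = "tfc-s3-demo" # placeholder policy name
  role = aws_iam_role.tfc_role.id

  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect   = "Allow"
      Action   = ["s3:*"]
      Resource = "*"
    }]
  })
}

# Environment variables that tell the workspace to use dynamic credentials
resource "tfe_variable" "aws_provider_auth" {
  workspace_id = tfe_workspace.s3_demo.id
  category     = "env"
  key          = "TFC_AWS_PROVIDER_AUTH"
  value        = "true"
}

resource "tfe_variable" "aws_run_role_arn" {
  workspace_id = tfe_workspace.s3_demo.id
  category     = "env"
  key          = "TFC_AWS_RUN_ROLE_ARN"
  value        = aws_iam_role.tfc_role.arn
}
```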

To recap, the green components that have just been set up are the authentication between Terraform Cloud and GitHub and the auth between Terraform Cloud and AWS. This only needs to be done once for each combination of Terraform Cloud workspace, GitHub repository, and AWS account configuration. As we saw, to achieve full automation, it can also be done using Terraform, which means it’s scalable. However, at some point during the initial bootstrap, you do need to manually create a workspace to hold the state for that initial Terraform.

Let’s move on to the purple components, which provision the resources defined in Terraform. This is a very basic demo repo that creates an S3 bucket in an AWS account. In a real-life scenario, it would also have tests and any linting and validation too. The provider configuration is easy, because the authentication has already been set up. Then define some variables and their values, and then, of course, the main Terraform for the S3 bucket with a couple of configuration options (there are many more; a rough sketch follows below). Now that the connection has been set up, what would we need to do to deploy? Create a new branch, commit the code, push to GitHub and create a PR, and that will trigger the plan, so you can see what it’s going to provision in AWS. Once the PR has been approved and merged to the main branch, it will trigger another plan.
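The demo repo itself, roughly sketched here with a placeholder region and bucket name, is only a handful of resources:

```hcl
terraform {
  required_providers {
    aws = {
      source = "hashicorp/aws"
    }
  }
}

# Credentials come from the Terraform Cloud dynamic credentials setup,
# so the provider block only needs a region here
provider "aws" {
  region = var.region
}

variable "region" {
  type    = string
  default = "eu-west-1" # placeholder region
}

variable "bucket_name" {
  type    = string
  default = "gitops-demo-bucket-12345" # placeholder bucket name
}

# The bucket itself, plus a couple of the many configuration options
resource "aws_s3_bucket" "demo" {
  bucket = var.bucket_name

  tags = {
    ManagedBy = "terraform"
  }
}

resource "aws_s3_bucket_versioning" "demo" {
  bucket = aws_s3_bucket.demo.id

  versioning_configuration {
    status = "Enabled"
  }
}
```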

If you’ve set auto-apply, then Terraform apply will run automatically. Now that this has been configured, scaling it to manage a lot of resources is really easy. Say you want 30 S3 buckets instead of one, all with the same configuration: write the code, create a feature branch, create a PR, let the automatic plan run, merge it into main, and there you go. You’ve got 30 buckets: easy to manage, easy to configure. For any changes in the future, all you need to do is update the code. The same goes for an EKS cluster or any other resource.
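For illustration, scaling the demo repo to those 30 identically configured buckets is little more than adding a count; the naming scheme here is made up.

```hcl
# Thirty buckets with the same configuration, driven purely by code
resource "aws_s3_bucket" "fleet" {
  count  = 30
  bucket = "gitops-demo-bucket-${count.index}" # placeholder naming scheme

  tags = {
    ManagedBy = "terraform"
  }
}
```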

Why automate? It makes it easy to standardize and scale, as we saw in that example. It gives good visibility and traceability over infrastructure deployments. It means multiple people can work in the same repo, and everyone can see what’s going on. If you have a proper Git branch protection strategy in place, there’s no risk of a Terraform apply running with outdated code. There’s also the option to import existing resources that were created manually into your Terraform state using Terraform import. I’ve seen, especially with some legacy apps, that things were just created manually at the very beginning, but now they need to be standardized, and certain options, for example whether something should be public or private, now need to be set. Importing those resources into the Terraform state can help you align that.
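As an illustration of the import idea, on newer Terraform versions you can do this declaratively with an import block; the bucket name below is hypothetical, and the older CLI equivalent is terraform import aws_s3_bucket.legacy <bucket-name>.

```hcl
# Terraform 1.5+ import block: bring a manually created bucket under management.
# After the next plan/apply, the existing bucket is tracked in state like any other resource.
import {
  to = aws_s3_bucket.legacy
  id = "legacy-bucket-created-by-hand" # placeholder name of the existing bucket
}

resource "aws_s3_bucket" "legacy" {
  bucket = "legacy-bucket-created-by-hand"
}
```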

Just one caveat: drift detection, which is obviously another point in favor of the GitOps side of things, is only available in the paid version of Terraform Cloud, so you do have to pay for that. This example used Terraform Cloud, GitHub, and AWS. Of course, there are many other tools out there with an equally rich set of integrations, and they also tend to have code examples and step-by-step guides to make things easier.

To recap, whether you choose continuous delivery or continuous deployment might depend on your organizational policy or general ways of working, especially for production. It’s also worth considering the different system components and which strategy is best for each. For example, it might be best to use continuous delivery for database provisioning, but continuous deployment for application code. Are different strategies needed in different environments? For example, continuous deployment for dev, while staging and production need that manual step. There are lots of tools available: Jenkins, CircleCI, GitLab, GitHub Actions, and many more. You need to choose the best one for your use case. Is one of them on your company Tech Radar? Are any of the others being used by other teams in your department?

Setting Clear Responsibilities

Moving on to setting clear responsibilities. I’ll tell you a story about a small operations department that supported a number of product teams. This is a true story. On Monday afternoon, a product team deploys a change. Everything looks good, so everyone goes home. At 2 a.m. on Tuesday, the operations team gets an on-call alert that clears after a few minutes. At 4 a.m., they get another alert that doesn’t clear. Unfortunately, there’s no runbook for this service describing how to resolve the issue, and the person on-call isn’t familiar with the application. They’ve got no way to fix the problem. They have to wait five hours, until 9 a.m., when they can contact the product team to get them to take a look. Later that day, another team deploys a change. Everything looks good, they all go home. However, this time at 11 p.m., on-call gets an alert.

This time, the service does have a runbook, but unfortunately, none of the steps in the runbook work. They request help on Slack, but no one else is online. They call the people whose phone numbers they have, but there’s no answer. They need to wait until people come online in the morning, a good few hours later, to resolve it. Not a good week for on-call, and definitely not a pattern that can be sustained long-term.

What are some reliability solutions? In the previous scenario, out-of-hours site reliability was the responsibility of the operations team, while working-hours site reliability was the responsibility of the product team. The Equal Experts playbook describes a few site reliability solutions. There’s You Build It, You Run It, as you’ve heard quite a lot: the product team receives the alerts and is responsible for support. Another option is operational enablers, a helpdesk that hands over issues to a cross-functional operational team. Then there’s Ops Run It: an operational bridge receives the alerts and hands over to Level 2 support, who can then hand over to Level 3 if required. Equal Experts advocates You Build It, You Run It for digital systems, and Ops Run It for foundational services, in a hybrid operating model.

Then, how about delivery solutions? In the previous scenario, the product team delivered the end-to-end solution, but they weren’t responsible for incident response. Of course, different solutions might work for different use cases. A large company might have dedicated networking, DBA, and incident management teams; for a smaller company, some of these roles or teams might be combined. These are the delivery solutions from the Equal Experts playbook. With You Build It, You Run It, the product team is responsible for application build, testing, deployment, and incident response. With Ops Run It, the product team is responsible for application build and testing before handing over to an operations team for change management approval and release.

Applying the You Build It, You Run It model to the original example, the product team would be the ones responsible for incident response, which means they might not have deployed at 4 p.m., because they wouldn’t have wanted to be woken up at night. Also, if they were on-call, they could probably have resolved the incident a lot more quickly, because they know their own application.

Just to recap, all services should have runbooks before reaching production. This means anyone with the required access can help support the application if needed. Runbooks should be regularly reviewed, and any changes to the application, infrastructure, or running environment should be reflected in the runbook. If possible, it can be worth setting up multiple levels of support: Level 1 support can work from the runbook, and if they can’t deal with the issue, they can hand over to a subject matter expert who can hopefully resolve it more quickly. Monitoring and alerting should be designed during the development process, and alerts can be tested in each environment. You test them in dev, which makes it a lot easier when you get to staging and means fewer alerts in production.
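One way to make that testable per environment, sketched here as an assumption rather than something from the talk, is to define the alerts as code too, so the same alarm rolls out to dev, staging, and prod; the metric, threshold, and names below are placeholders.

```hcl
variable "environment" {
  type    = string
  default = "dev" # dev, staging, or prod
}

# Hypothetical error-rate alarm, defined once and rolled out to every environment,
# so the alerting can be exercised in dev and staging before it matters in prod.
# In a real setup you would add dimensions and alarm_actions for your notification channel.
resource "aws_cloudwatch_metric_alarm" "http_5xx" {
  alarm_name          = "demo-app-5xx-${var.environment}"
  namespace           = "AWS/ApplicationELB"
  metric_name         = "HTTPCode_Target_5XX_Count"
  statistic           = "Sum"
  period              = 300
  evaluation_periods  = 1
  threshold           = 10
  comparison_operator = "GreaterThanThreshold"
  treat_missing_data  = "notBreaching"
}
```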

Then, if budget allows, it can be worth using a good on-call and incident management tool, for example PagerDuty, Opsgenie, ServiceNow, or Grafana; there are many of them. They can give you things like real-time dashboards, observability, and easy on-call scheduling, and a lot of them support automated configuration. To use the Terraform example again, many of them have Terraform providers (or providers for other tools) and are quite easy to set up and configure.
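As a hedged example of that automated configuration, a minimal PagerDuty setup in Terraform might look something like this; the schedule details and the user ID are placeholders.

```hcl
terraform {
  required_providers {
    pagerduty = {
      source = "PagerDuty/pagerduty"
    }
  }
}

provider "pagerduty" {
  # Assumes a PagerDuty API token, e.g. via the PAGERDUTY_TOKEN environment variable
}

# A simple weekly rotation; the start dates and user ID are placeholders
resource "pagerduty_schedule" "primary" {
  name      = "Primary on-call"
  time_zone = "Europe/London"

  layer {
    name                         = "Weekly rotation"
    start                        = "2024-01-01T09:00:00Z"
    rotation_virtual_start       = "2024-01-01T09:00:00Z"
    rotation_turn_length_seconds = 604800      # one week
    users                        = ["PABC123"] # placeholder PagerDuty user ID
  }
}

resource "pagerduty_escalation_policy" "default" {
  name      = "Default escalation"
  num_loops = 2

  rule {
    escalation_delay_in_minutes = 15

    target {
      type = "schedule_reference"
      id   = pagerduty_schedule.primary.id
    }
  }
}
```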

Psychological Safety

Let’s move on to psychological safety. I’m going to tell you a story. Once upon a time, there was an enthusiastic junior developer. Let’s call them Jo. Jo was given a task to clean up some old files from a file system. Jo wrote a script, tested it out, and everything looked good. Jo then realized there were a ton of other similar files in other directories that could be cleaned up. They decided to go above and beyond and refined the script to search for all directories of a certain type. They tested it locally and everything looked good to go. They ran the script against the internal file system and all of the expected files were deleted. They checked the file system, saw there was tons of space, and gave themselves a pat on the back for a job well done.

However, suddenly other people in the office started to mention that their files were missing from the file system. Jo decided to double-check their script, just in case. Jo realized that the command to find the directories to delete files from was actually returning an empty string. They were basically running rm -rf /* at the root directory level and deleting everything. Luckily for Jo, they had a supportive manager, and they went to tell them what had happened. Their manager attempted to restore the files from the latest backup, but unfortunately that failed. The only remaining option was to stop all access to the file system, try to salvage what was left, and then restore from a previous day’s backup. As most people at the time were using desktop computers and the internal file system for document storage, not much work was done for the rest of the day. The impact was quite small in this case, but obviously, in a different scenario, it could have been a lot worse.

What happened to Jo? Jo certainly learnt a lesson. Luckily for Jo, the team were quite proactive. They ran an incident post-mortem to learn from the incident, find the root cause of the problem, and identify any solutions. What was the outcome? It was quite good. They put a plan in place to implement least privilege access, and regular backup and restore testing was also put in place. Then Jo, of course, never ran a destructive, untested development script in a live environment again. Least privilege access, as I’m sure most of you know, is a cybersecurity best practice where users are given the minimum privileges needed to do a task. What are the main benefits? In general, forcing users to assume a privileged role can be a good reminder to be more cautious. It also protects you as an employee.

Even in the absence of a sandbox, you know you can test out new tools and scripts without worrying about destroying key resources. It also protects the business, as they know that only certain people have the privileges to perform destructive tasks. Then, in larger companies, well-defined permissions are good evidence for security auditing, and they provide peace of mind for cybersecurity teams and more centralized functions.
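The original story was about an internal file system, but to show the same least-privilege idea in the Terraform and AWS context used earlier, a narrowly scoped policy for a cleanup script might look something like this; the bucket and prefix are hypothetical.

```hcl
# A narrowly scoped policy: list and delete only under one prefix of one bucket,
# with no ability to touch anything else in the account
resource "aws_iam_policy" "cleanup_script" {
  name = "cleanup-script-least-privilege" # placeholder policy name

  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        Effect   = "Allow"
        Action   = ["s3:ListBucket"]
        Resource = "arn:aws:s3:::team-share-bucket" # placeholder bucket
        Condition = {
          StringLike = { "s3:prefix" = ["scratch/*"] }
        }
      },
      {
        Effect   = "Allow"
        Action   = ["s3:GetObject", "s3:DeleteObject"]
        Resource = "arn:aws:s3:::team-share-bucket/scratch/*"
      }
    ]
  })
}
```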

What is psychological safety? It’s the belief that you won’t be punished for speaking up with ideas, questions, concerns, or mistakes. Amy Edmondson codified the concept in the book, “The Fearless Organization”. There have been many studies done on psychological safety. One example is Project Aristotle, a two-year study by Google to identify the key elements of successful teams; psychological safety was one of the five components that study found in high-performing teams. There are lots of workshops and toolkits that can provide proper training and give you more information.

To give an example of the kind of questions you might see, here are some from a questionnaire you can take yourself on Amy Edmondson’s website, the Fearless Organization Scan. If you make a mistake on this team, it is often held against you. Members of this team can bring up problems and tough issues. People on this team sometimes reject others for being different. It is safe to take a risk on this team. It is difficult to ask other members of this team for help. Working with members of this team, my unique skills and talents are valued and utilized. Then, finally, no one on this team would deliberately act in a way that undermines my efforts.

How did working in a team with good levels of psychological safety help Jo? Jo acknowledged their involvement and shared the root cause of the problem so it could be dealt with as quickly as possible. If Jo hadn’t spoken up, it would probably have taken longer to actually find the root cause and fix it. Jo’s direct manager was approachable and acknowledged there were key learnings and improvements that could be made, and they both actively engaged in that post-mortem to find the solution. It’s always worth considering, if Jo had seen other people being punished for admitting their mistakes, would they have spoken up at all?

Recap

To recap the key learnings: technology evolves quickly. Here are some links to the things we went through. We ran through general Tech Radars and custom Tech Radars and how they can be useful, and the benefits and some pitfalls of InnerSource. Then automation, which will save you time and effort in the long run. We touched on CI/CD and considerations for continuous delivery versus continuous deployment, whether you should use different strategies for different environments and deployment types, the advantages of GitOps, and then the demo of the GitHub, Terraform, and AWS setup.

Then, setting clear responsibilities. We ran through the Equal Experts delivery and site reliability solutions, how all services should have runbooks before they go to production, and how it’s important to design and implement monitoring and alerting during the development process. Then, finally, how working in a psychologically safe environment benefits everyone. We ran through some of the questions from that psychological safety questionnaire, and also how blameless incident post-mortems can help maintain psychological safety by helping everyone learn from incidents, find the root cause of the problem, and identify the solutions.

Questions and Answers

Participant: With the move from the copy-paste bit to the GitOps bit, how long did that take? How did you manage that transition?

Allen: That took probably a few months to a year. We started with development and then moved to production; that was probably the biggest step. Obviously, we had other work priorities as well, so it was about balancing it in between those. Once it was done, everyone realized the benefits, but it was a slightly painful process.
