Top 10 Big Data Certifications in 2025 – Analytics Insight

MMS Founder
MMS RSS

Posted on nosqlgooglealerts. Visit nosqlgooglealerts

Big data plays a crucial role in various industries, and professionals looking for career growth can benefit from certifications that validate their expertise. Here are the top 10 big data certifications in 2025.

1. IBM Data Science Professional Certificate

This certificate covers the basics of data science, machine learning, and data visualization, and requires no previous data experience.

2. Cloudera CDP Certification Program

Cloudera offers certifications to users of the Cloudera Data Platform (CDP). The exams assess both general and administrator-level proficiency.

3. Certified Analytics Professional (CAP)

This certification covers analytics problem framing, model building and data handling. Great for those wanting to polish their expertise in analytics.

4. SAS Certified Data Scientist

SAS offers a comprehensive program that covers machine learning, AI and data curation. It also requires passing multiple exams.

5. Data Science Council of America (DASCA) Certifications

The Data Science Council of America (DASCA) offers certifications for engineers, analysts, and scientists working in big data. Credentials are available at both entry and advanced career levels.

6. MongoDB Professional Certification

MongoDB offers professional certification exams that validate developers and database administrators who work with its NoSQL database. The certifications serve as proof of competence in managing big data projects.

7. Dell EMC Data Scientist Certifications

Dell EMC offers certifications covering data science, big data analytics and data engineering. Practical skills related to advanced analytics form the core content focus of these exams.

8. Microsoft Azure Data Scientist Associate

This certification evaluates skills in Azure Machine Learning and Databricks, and suits professionals who work with cloud-based data solutions.

9. Open Certified Data Scientist

This certification takes a different approach from standard exams: candidates document their abilities through written submissions, which are assessed alongside feedback from their peers.

10. Columbia University Data Science Certificate

Columbia University offers a non-degree program covering core data science knowledge, with coursework in machine learning and data analysis.

Conclusion

Big data certifications help professionals validate their expertise. They also open doors to better job opportunities. Choosing the right certification depends on career goals and industry requirements.

Subscribe for MMS Newsletter

By signing up, you will receive updates about our latest information.



GitHub Leverages AI for More Accurate Code Secret Scanning

MMS Founder
MMS Steef-Jan Wiggers

Article originally posted on InfoQ. Visit InfoQ

GitHub has launched an AI-powered secret scanning feature within Copilot, integrated into GitHub Secret Protection, that leverages context analysis to improve the detection of leaked passwords in code significantly. This new approach addresses the shortcomings of traditional regular expression-based methods, which often miss varied password structures and generate numerous false positives.
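To see why pattern matching alone falls short, consider a minimal regex-based scanner. This is a sketch with illustrative patterns, not GitHub's actual rules: a well-structured token is easy to match, but generic passwords force a loose pattern that also fires on placeholders and test fixtures.

```python
import re

# Illustrative patterns only -- not GitHub's actual detection rules.
PATTERNS = {
    # Well-structured tokens (e.g. AWS access key IDs) have a fixed shape
    # and are easy to match reliably.
    "aws_access_key_id": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    # Generic passwords have no fixed shape, so the pattern must be loose,
    # which is exactly what causes false positives.
    "generic_password": re.compile(r"password\s*=\s*['\"]([^'\"]+)['\"]",
                                   re.IGNORECASE),
}

def scan(text):
    """Return (rule_name, matched_text) pairs, with no notion of context."""
    hits = []
    for name, pattern in PATTERNS.items():
        for m in pattern.finditer(text):
            hits.append((name, m.group(0)))
    return hits
```

Run against a unit-test fixture such as `password = "example-placeholder"`, the loose rule fires even though no real credential is leaked, which is the kind of alert a context-aware scanner can suppress.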

According to a GitHub blog post detailing the development, the system now analyzes the usage and location of potential secrets to reduce irrelevant alerts and provide more accurate notifications critical to repository security. Sorin Moga, a senior software engineer at Sensis, commented on LinkedIn that this marks a new era in platform security, where AI not only assists in development, but also safeguards code integrity.

A key challenge identified during the private preview of GitHub’s AI-powered secret scanning was its struggle with unconventional file types and structures, highlighting the limitations of relying solely on the large language model’s (LLM) initial training data. GitHub’s initial approach involved “few-shot prompting” with GPT-3.5-Turbo, where the model was provided with examples to guide detection.
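Few-shot prompting means showing the model labelled examples before the candidate snippet. A sketch of how such a prompt might be assembled follows; the wording and labels are invented for illustration, since GitHub's actual prompts are not public.

```python
# Hypothetical few-shot examples: (snippet, label) pairs shown to the model.
FEW_SHOT_EXAMPLES = [
    ('db_password = "hunter2"', "SECRET"),
    ('password_field_name = "password"', "NOT_SECRET"),
    ('token = os.environ["API_TOKEN"]', "NOT_SECRET"),
]

def build_prompt(candidate):
    """Assemble a few-shot classification prompt for one candidate snippet."""
    lines = ["Classify whether the snippet leaks a real credential.", ""]
    for snippet, label in FEW_SHOT_EXAMPLES:
        lines.append(f"Snippet: {snippet}")
        lines.append(f"Label: {label}")
        lines.append("")
    # The model is asked to complete the label for the unseen candidate.
    lines.append(f"Snippet: {candidate}")
    lines.append("Label:")
    return "\n".join(lines)
```

The completed prompt would then be sent to the model, which continues the pattern by emitting a label for the final snippet.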

To address these early challenges, GitHub significantly enhanced its offline evaluation framework by incorporating feedback from private preview participants to diversify test cases and leveraging the GitHub Code Security team’s evaluation processes to build a more robust data collection pipeline. They even used GPT-4 to generate new test cases based on learnings from existing secret scanning alerts in open-source repositories. This improved evaluation allowed for better measurement of precision (reducing false positives) and recall (reducing false negatives).
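Precision and recall here reduce to simple ratios over true positives (TP), false positives (FP), and false negatives (FN):

```python
def precision(tp, fp):
    # Of everything the scanner flagged, how much was a real secret?
    return tp / (tp + fp)

def recall(tp, fn):
    # Of all real secrets present, how many did the scanner flag?
    return tp / (tp + fn)
```

For example, with hypothetical counts of 90 true positives, 10 false positives, and 30 missed secrets, precision is 0.9 and recall is 0.75; improving one without degrading the other is the balance the evaluation framework measures.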

GitHub experimented with various techniques to improve detection quality, including trying different LLM models (like GPT-4 as a confirming scanner), repeated prompting (“voting”), and diverse prompting strategies. Ultimately, they collaborated with Microsoft, adopting their MetaReflection technique, a form of offline reinforcement learning that blends Chain of Thought (CoT) and few-shot prompting to enhance precision.
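Repeated prompting ("voting") classifies the same candidate several times and keeps the majority answer, smoothing over run-to-run variance in the model's output. A minimal sketch:

```python
from collections import Counter

def vote(labels):
    """Majority vote over repeated classifications of the same candidate."""
    counts = Counter(labels)
    label, _ = counts.most_common(1)[0]
    return label
```

So if three runs on one candidate return `["SECRET", "SECRET", "NOT_SECRET"]`, the voted result is `"SECRET"`.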

As stated in the GitHub blog post:

We ultimately ended up using a combination of all these techniques and moved Copilot secret scanning into public preview, opening it widely to all GitHub Secret Protection customers.

To further validate these improvements and gain confidence for general availability, GitHub implemented a “mirror testing” framework. This involved testing prompt and filtering changes on a subset of repositories from the public preview. By rescanning these repositories with the latest improvements, GitHub could assess the impact on real alert volumes and false positive resolutions without affecting users.
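The mirror-testing idea can be sketched as set arithmetic over alert identifiers from the old and new scans; the data shapes here are hypothetical, not GitHub's internal representation.

```python
def mirror_test(old_alerts, new_alerts, confirmed_real):
    """Compare alert sets from rescanning the same repositories.

    old_alerts / new_alerts: sets of alert identifiers from each scan.
    confirmed_real: identifiers known to be true positives.
    """
    dropped = old_alerts - new_alerts
    # Dropped alerts that were never real findings: false positives removed.
    fp_removed = dropped - confirmed_real
    # Real findings the new scan no longer reports: recall regressions.
    real_lost = dropped & confirmed_real
    return fp_removed, real_lost
```

A large `fp_removed` with an empty `real_lost` is the outcome described next: fewer false positives without sacrificing detection of actual passwords.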

This testing revealed a significant drop in both detections and false positives, with minimal impact on finding actual passwords, including a 94% reduction in false positives in some cases. The blog post concludes that:

This before-and-after comparison indicated that all the different changes we made during private and public preview led to increased precision without sacrificing recall, and that we were ready to provide a reliable and efficient detection mechanism to all GitHub Secret Protection customers.

The lessons learned during this development include prioritizing accuracy, using diverse test cases based on user feedback, managing resources effectively, and fostering collaboration. These learnings are also being applied to Copilot Autofix. Since the general availability launch, Copilot secret scanning has been part of security configurations, allowing users to manage which repositories are scanned.




End of the Road for FaunaDB: Is the Future Open Source?

MMS Founder
MMS Renato Losio

Article originally posted on InfoQ. Visit InfoQ

The team behind the distributed serverless database Fauna has recently announced plans to shut down the service by the end of May. While the managed database will be terminated soon and all customers will have to migrate to other platforms, Fauna is committing to releasing an open source version of the core database technology alongside the existing drivers and CLI tooling.

Started in 2011 as FaunaDB by the team that scaled Twitter by building its in-house databases and systems, Fauna tried for many years to combine the power of a relational database with the flexibility of JSON documents. Fauna was designed to scale horizontally within a data center to maximize throughput while easily spanning globally distributed sites, ensuring reliability and local performance. With the vision of enabling “applications without database limits” and claiming use by more than 80,000 development teams, the service has now reached the end of the road.

According to the Fauna Service End of Life FAQ, the Fauna service will be turned off on May 30th, and all Fauna accounts will be deleted. The team writes:

Driving broad based adoption of a new operational database that runs as a service globally is very capital intensive. In the current market environment, our board and investors have determined that it is not possible to raise the capital needed to achieve that goal independently.

Yan Cui, AWS Serverless Hero and serverless expert, writes:

Sad to see Fauna go. They were one of the first truly serverless databases on the market.

Ankur Raina, senior staff sales engineer at Cockroach Labs, summarizes:

The DB market is brutal (…) Getting large customers on Serverless databases is hard. (…) Fauna was trying to build the document model of MongoDB, consistency & geo distribution of CockroachDB but without any ability to run it beyond two cloud providers.

The sunsetting of a once-popular database has sparked many reactions within the community. In a popular Hacker News thread, Pier Bover, founder of Waveki, writes:

A decade ago it seemed that edge computing, serverless, and distributed data was the future. Fauna made a lot of sense in that vision. But in these years since, experimenting with edge stuff, I’ve learned that most data doesn’t really need to be distributed. You don’t need such a sophisticated solution to cache a subset of data for reads in a CDN or some KV. What I’m saying is that, probably, Cloudflare Workers KV and similar services killed Fauna.

User strobe adds:

I found Fauna very interesting from a technical perspective many years ago, but even then, the idea of a fully proprietary cloud database with no reasonable migration options seemed pretty crazy at the time. (…) Hope that something useful will be open sourced as a result.

Peter Zaitsev, open source advocate, questions instead:

While there is no alternative history, I wonder what would have happened if Fauna had chosen to start as Open Source, become 100x more widely adopted but monetize a smaller portion of their customers due to “competing” Open Source alternatives.

The market of distributed databases that once competed with Fauna includes Google Spanner, PlanetScale, CockroachDB, and TiDB, among others. A migration guide is now available, offering the option to create snapshot exports, with the exported data stored as JSON files in an AWS S3 bucket. For smaller collections, data can also be exported using FQL queries.
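As a sketch of what consuming such a snapshot export might look like, assuming one JSON document per line after downloading a file from the S3 bucket (the actual export layout may differ; check the migration guide for your snapshot):

```python
import json

def load_export(lines):
    """Parse documents from a downloaded Fauna snapshot export file.

    Assumes one JSON document per line; skips blank lines.
    """
    return [json.loads(line) for line in lines if line.strip()]

# Hypothetical contents of an export file fetched from the S3 bucket,
# e.g. after an `aws s3 cp` of one exported object:
sample = [
    '{"id": "101", "name": "alice"}',
    '{"id": "102", "name": "bob"}',
]
```

From here, the parsed documents can be bulk-loaded into whichever target database the migration chooses.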




Tessell’s Multi-Cloud DBaaS is Now Available on Google Cloud

MMS Founder
MMS RSS

Posted on mongodb google news. Visit mongodb google news

Tessell, the leading next-generation, multi-cloud database-as-a-service (DBaaS), is announcing that the Tessell DBaaS is now available in the Google Cloud Marketplace, accompanied by support for Oracle, PostgreSQL, SQL Server, MySQL, MongoDB, and Milvus on all four major cloud platforms—Azure, AWS, Google Cloud, and OCI. With this launch, Tessell empowers enterprises to modernize their transactional applications, database estates, and data architectures all within Google Cloud’s infrastructure. 

This announcement taps into the recent collaboration between Oracle and Google Cloud, which brought support for Oracle databases on Google Cloud infrastructure. Building off of this opportunity for innovation in cloud-based data management, Tessell delivers a fully managed solution for streamlining the complexities of managing multiple data ecosystems at once, according to the company. 

“Tessell’s support for Oracle, PostgreSQL, SQL Server, MySQL, MongoDB, and Milvus on Google Cloud empowers enterprises to capitalize on the newly available opportunity to bring application workloads to Google Cloud,” said Bala Kuchibhotla, co-founder and CEO at Tessell. “Tessell has already seen rapid adoption of its fully managed database service on Google Cloud, with customers successfully running mission-critical workloads. Organizations are leveraging the platform to simplify operations, improve scalability, and accelerate cloud adoption without the complexities traditionally associated with database management. As more enterprises recognize the benefits of this streamlined approach, Tessell looks forward to expanding its footprint and supporting even more businesses in their cloud transformation journey.”

Tessell’s fully managed service offers the following advantages:

  • Automated maintenance, including for patching, backup, and recovery, which helps reduce downtime and improve reliability
  • High availability and disaster recovery with built-in multi-zone availability and cross-region recovery to ensure business continuity 
  • Data security and compliance with adaptable backup options and strong recovery mechanisms that adhere to strict compliance and regulatory policies
  • Enterprise-grade flexibility by enabling the automation and security benefits of PaaS with the customization features of IaaS
  • Unified security and compliance posture, allowing enterprises to extend their existing security and compliance services to Google Cloud while bringing their own keys 

“Bringing Tessell DBaaS to Google Cloud Marketplace will help customers quickly deploy, manage, and grow the managed database service on Google Cloud’s trusted, global infrastructure,” said Dai Vu, managing director, marketplace and ISV GTM programs at Google Cloud. “Tessell can now securely scale and support customers on their digital transformation journeys.”

“Tessell’s deep database expertise, customer-first approach, and solution-focused mindset made our cloud migration seamless,” said Martti Kontula, head of OT and data at Landis+Gyr. “Their ability to optimize and manage database workloads on Google Cloud ensured a smooth transition. The Tessell platform delivers a powerful, intuitive experience, providing full visibility into database health and performance at a glance. For any enterprise seeking to run databases efficiently in the cloud, Tessell is the ideal choice.”

To learn more about Tessell, please visit https://www.tessell.com/.




Valkey 8.1’s Performance Gains Disrupt In-Memory Databases – The New Stack

MMS Founder
MMS RSS

Posted on nosqlgooglealerts. Visit nosqlgooglealerts




Redis fork Valkey, with a new multithreading architecture, delivers a threefold improvement in speed along with memory efficiency gains.


Mar 25th, 2025

NAPA, Calif — A year ago, Redis announced that it was dumping the open source BSD 3-clause license for its Redis in-memory key-value database and was moving it to a “source-available” Redis Source Available License (RSALv2) and Server Side Public License (SSPLv1).

That move went over like a lead brick with many Redis developers and users, so disgruntled developers forked a new project, Valkey, as “an open source alternative to the Redis in-memory NoSQL data store.” The fork has since proved remarkably successful.

How successful? According to a Percona research paper, “75% of surveyed Redis users are considering migration due to recent licensing changes. … Of those considering migration, more than 75% are testing, considering, or have adopted Valkey.” Perhaps a more telling point is that third-party Redis developer companies, such as Redisson, are supporting both Redis and Valkey.

Multithreading and Scalability

It’s not just the licensing changes that make Valkey attractive, though. At the Linux Foundation Member Summit, Madelyn Olson, a principal software engineer at Amazon Web Services (AWS) and Valkey project maintainer, said in her keynote speech that Valkey is far faster thanks to incorporating enhanced multithreading and scalability features.

That, Olson added, was not the original plan. “We wanted to keep the open source spirit of the Redis project alive, but we also wanted the value to be more than just a fork. We organized a contributor summit in Seattle where we got together developers and users to try to figure out what this new project should look like. At the time, I was really expecting us to just focus on caching, the main workload that Redis open source was serving. What we heard from our users is that they wanted so much more. They wanted Valkey to be a high-performance database for all sorts of distributed workloads. And so although that would add a lot of complexity to the project, the new core team sort of took on that mantle, and we tried to build that for our community.”

They were successful. By August of 2024, Dirk Hohndel, a Linux kernel developer and long-time open source leader, said the Valkey 8.0 redesign of Redis’s single-threaded event loop threading model with a more sophisticated multithreaded approach to I/O operations had given him “roughly a threefold improvement in performance, and I stream a lot of data, 60 million data points a day.” In addition, with Valkey 8, he saw about a “20% reduction in the size of separate cache tables. When you’re talking about terabytes and more on Amazon Web Services, that’s a real savings in size and cash.”
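The I/O threading behind these gains is opt-in and is configured in `valkey.conf`, using settings inherited from Redis 6's I/O threading model (check your version's documentation before relying on them):

```
# Use additional threads for network reads and writes; tune to the number
# of available cores (the main thread still executes commands).
io-threads 4

# Also offload reads (and request parsing) to the I/O threads.
io-threads-do-reads yes
```

With threading left at the default of 1, the server behaves like the classic single-threaded event loop the redesign replaced.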

Shifting back to the current day, Olson added, “Over the last couple of months, we’ve been dramatically improving the core engine by adding Rust into the core to add memory safety. We’ve been changing the internal algorithm for how the cluster mode works to improve reliability and improve the failover times. We’re also dramatically changing how the internal data structures work since they were based on 10-year-old pieces of software, so they can better take advantage of modern hardware.”

In addition, the developer team has rebuilt the key-value store from scratch to take better advantage of modern hardware, based on the work done at Google on the so-called Swiss Tables. Olson continued, “In just a few short weeks, we’ll release these improvements as part of Valkey 8.1, just one year after the project’s anniversary.” This new release includes memory efficiency improvements of up to 20%, addressing the most common bottleneck in caching systems, alongside state-of-the-art data structures.

Looking ahead, Valkey plans to introduce more multithreaded performance improvements, a highly scalable clustering system, and new core changes to data types. Does that sound good to you? The project remains open to new contributors and invites interested parties to join via GitHub.




Presentation: A Platform Engineering Journey: Copy and Paste Deployments to Full GitOps

MMS Founder
MMS Jemma Hussein Allen

Article originally posted on InfoQ. Visit InfoQ

Transcript

Allen: I’ll be talking about the platform engineering journey, from copy and paste deployments to full GitOps. The goal of this session is to share some lessons I’ve learned during my career and hopefully give you some solutions to the challenges you’re already facing or may face in the future. The key learnings I want you to take away are, technology moves quickly and it can be really difficult and time-consuming to keep up with the latest innovations. I’ll share some practical strategies that will hopefully help make things easier. Secondly, automation will always save you time in the long run, even if it takes a bit more time in the short term. Thirdly, planning short and long-term responsibilities for a project as early as possible can save everyone a lot of headaches. Finally, a psychologically safe working environment benefits everyone.

Professional Journey

I’ve always loved technology. My first home computer was an Amstrad, one of those great big things that takes up the whole desk. We didn’t have home internet connection at the time. Games were on floppy disks that were actually still floppy and took around 10 minutes to load, if you were lucky. When I was a bit older, technology started to advance quickly. The Y2K bug took over headlines. Dial-up internet became mainstream. I bought my first mobile phone, which was pretty much indestructible. Certainly, better than my one today, which breaks all the time. I followed my passion for technology. I did a degree in software engineering, and a few years into my career, a PgCert in advanced information systems. After I graduated, I started working as a web developer for a range of small media companies. I was a project manager and developer for an EU-funded automatic multilingual subtitling project, which is a pretty interesting first graduate job.

Then, as a web developer, building websites for companies like Sony, Ben & Jerry’s, Glenlivet. It involved a range of responsibilities, so Linux, Windows Server, and database administration, building HTML pages and Photoshop designs, writing both front and backend code, and some project management thrown into the mix. As a junior developer, I did say at least a few times, it works for me, so that must be a problem with something that operations manage, so I didn’t care about it. Now, of course, I know better. After a few years, around the time DevOps became popular, I started to work in larger enterprise companies as a DevOps engineer, as it was called then. Sometimes it was focused on automation and sometimes on software development. I moved into a senior engineering role and then a technical lead.

Then I moved to infrastructure architecture for a bit and then back to a tech lead again, where I am now. After two kids and many different tools, tech stacks, and projects later, I support a centralized platform and product teams by developing self-service tooling and reusable components.

A Hyper-Connected World

We’re in 2024. It’s a hyper-connected world. These days, we can provision hundreds or even thousands of cloud resources anywhere in the world, with the main barrier being cost. Today, around 66% of the world have internet access. We can contact anyone who’s connected 24 hours a day, 7 days a week. We can contact family and friends at the tap of a button. We can access work email anytime, day or night, which can be a good or a bad thing, depending on if you’re on call. We can get always instant notifications about things happening around the world and can livestream events, for example, the eclipse. We can ask a question and receive a huge range of answers based on real-time information. We really are in a hyper-connected world.

Looking Back

Let’s take a quick step back to the 1980s. In 1984, the internet had around 1,000 devices that were mainly used by universities and large companies. This was the time before home computers were commonplace, and most people had to be at a bookshop or library if they wanted information for a school project or a particular topic. To set the context, here’s a video from a Tomorrow’s World episode in 1984, demonstrating how to send an email from home.

Speaker 1: “Yes, it’s very simple, really. The telephone is connected to the telephone network with a British telecom plug, and I simply remove the telephone jack from the telecom socket and plug it into this box here, the modem. I then take another wire from the modem and plug it in where the telephone was. I can then switch on the modem, and we’re ready to go. The computer is asking me if I want to log on, and it’s now telling me to phone up the main Prestel computer, which I’ll now do”.

Speaker 2: “It is a very simple connection to make”.

Speaker 1: “Extremely simple. I can actually leave the modem plugged in once it’s done that, without affecting the telephone. I’m now waiting for the computer to answer me”.

Allen: I’m certainly glad it’s a lot easier to send an email nowadays. By 1992, the internet had 1 million devices, and now in 2024, there are over 17 billion.

Technology Evolves Quickly

That leads into the first key learning, technology moves quickly. We all lead busy lives, and we use technology both inside and outside of work. In a personal context, technology can make our day-to-day lives a lot easier. For example, we saw a huge rise in video conferencing for personal calls during the COVID lockdowns. In a work context, we need to know what the latest advancements are and whether they’re going to be relevant and useful to us and our employer. Of course, there are key touch points like here at QCon, where you can hear some great talks about the latest innovations and discuss solutions to everyday problems. There are more technology-specific ones like AWS Summit, Google IO, and one of the many Microsoft events. Then there are lots of good blogs and other great online content. At a company level, you’ve got general day-to-day knowledge sharing with colleagues and hackathons, which can be really valuable.

With so much information, how can we quickly and effectively find the details we need to do our jobs? One tool that I've found to be a really useful discussion point at work is the Tech Radar. Tech Radars can really help with keeping up with the popularity of different technologies. The Tech Radar is an idea first put forward by Darren Smith at Thoughtworks. It's essentially a stage-based review of techniques, tools, platforms, frameworks, and languages. There are four rings: Adopt, technologies that should be adopted because they provide significant benefits; Trial, technologies that should be tried out to assess their potential; Assess, technologies that need evaluation before use; and Hold, technologies that should be avoided or decommissioned. Thoughtworks releases an updated Tech Radar twice a year, giving an overview of the latest technologies and noting whether any existing technologies have moved to a different ring.

I found these can also be really useful at a company or department level to help developers choose the best tools for new development projects. I've personally seen them work quite well in companies, as they help to keep everyone moving in the same technical direction and using similar tooling. Let's face it, no one wants to implement a new service only to find that it uses a tool that's due to be decommissioned. It will cost the business money funding the migration, and all the work to integrate the legacy tool will be lost. Luckily, creating a basic proof of concept for a Tech Radar is quite easy, and there are some existing tools you can use as a starting point.

For example, Zalando have an open-source GitHub repository you can use to get up and running. Or if you use a developer portal like Backstage, there's a Tech Radar plugin for that as well. One of the key benefits of a Tech Radar is that it's codified, unlike a static diagram, which can get old quite quickly. Things like automated processes, user contributions, and suggestions for new items can be easily integrated, and any approval mechanisms that are needed can also be added quite easily.
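A codified radar can be as simple as entries kept in data, which is what makes automated checks and contribution workflows possible. A minimal sketch (the schema is invented and the entries are illustrative):

```python
# Each radar entry is plain data, so tooling can validate, render, and
# review changes to it through normal pull requests.
RADAR = [
    {"name": "Pulumi", "quadrant": "tools", "ring": "trial"},
    {"name": "Terraform CDK", "quadrant": "tools", "ring": "assess"},
    {"name": "Terraform", "quadrant": "tools", "ring": "adopt"},
]

def in_ring(radar, ring):
    """List the technologies currently placed in a given ring."""
    return [entry["name"] for entry in radar if entry["ring"] == ring]
```

A contribution then becomes a reviewed change to the data, and a rendered diagram is just a view over it.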

Let's take a general example. You're in a big company, and you're selecting an infrastructure-as-code tool for a new project to provision some AWS cloud infrastructure. Your department normally uses Terraform, so Terraform is probably the default choice. There are plenty of examples in the company, some InnerSource modules, and you know you can write the code in a few days. Looking at the other options, Chef is good, but it's not really used much in the department at the moment. As the company tends to move towards more cloud-agnostic technology, they don't really use CloudFormation. Lately, you've heard that some teams have been trying out Pulumi, and some have tried out the Terraform CDK. You look at the company Tech Radar and see that Pulumi is under trial, and the Terraform CDK is under assess.

As the project has tight timelines, you know you need a tool that's well integrated with company tooling. While it might be worth confirming that Pulumi is still in the trial stage, after that you probably want to focus any remaining investigation time on checking whether there's any benefit to trialing the Terraform CDK. If you can't find any benefit, then you probably go with standard Terraform, because it's the easiest option: you know it integrates with everything already, even if it's not particularly innovative. Of course, if the project didn't have those time constraints, you could spend more time investigating whether there's actually any benefit to using the newer tooling, the Terraform CDK or Pulumi, and then put a business case forward to use it.

Another strategy that I found to be quite useful in adopting new tools within an organization is InnerSource. It’s something that’s been gaining popularity recently. It’s a term coined by Timothy O’Reilly, the founder of O’Reilly Media. InnerSource is a concept of using open-source software development practices within an organization to improve software development, collaboration, and communication. More practically, InnerSource helps to share reusable components and development effort within a company. This is something that can be really well suited to larger enterprise organizations or those with multiple departments. What are the benefits of InnerSource? You don’t need to start from scratch. You can use existing components that have already been developed internally for the company, which means less work for you, which is always a win. InnerSource components can also be really useful if a company has specific logic.

For example, if all resources of a certain type need to be tagged with specific labels for reporting purposes. It’s an easy way of making sure all resources are compliant, and changes can be applied in a single place and then propagated to all areas that use the code. If you find suitable components that meet, for example, 80% of requirements, then you can spend your development time building the extra 20% functionality, and then contributing that back to the main component for other people to use in the future and also for yourself to use in the future. What are the challenges? If pull requests take a long time to be merged back to the main InnerSource code, it can then mean that you have multiple copies of the original InnerSource code in your repo or in the branch. You then need to go back and update your code to point to the main InnerSource branch once your PR has been merged.

In reality, I've seen that InnerSource code that's been copied across can end up staying around for quite a long time, because people forget to go back and repoint to the main InnerSource repo. Making sure that InnerSource projects have active maintainers can help solve that issue. One other problem is not having shared alignment on the architecture of components. Should new functionality be added to an existing component, or is it best to create a whole new component for it? Having alignment on things like this can make the whole process a lot easier.

Automation

Moving on to another key learning. Automation will almost always save you time and effort in the long run, even if it takes a bit more effort initially. Running through the three key terms: continuous integration is the regular merging of code changes into a code repository, triggering automated builds and tests. Continuous delivery is the automated release of code that’s passed the build and test stages of the CI step. Continuous deployment is the automated deployment of released code to a production environment with no manual intervention. Going back to the topic of copy and paste deployments, here’s an example of a deployment pipeline at somewhere I worked a few years ago. Developers, me included, had a desktop with file shares for each of the different environments, so dev, test, and prod, which linked to the servers in those environments. Changes were tracked in a normal project management tool, think something like Jira.

Code was committed from a local checkout to Subversion, and unit tests were run locally, but there was no status check to make sure they had actually been run. Deployments involved copying code from the local desktop into the file share directory, from where it would go up to the server. As I’m sure you can see, there are a lot of downsides to this deployment method. Sometimes there were problems with the file share, and not all files were copied across at the same time, meaning the environment was out of sync. Sometimes, because of human error, not all of the files were copied across to the new environment. Because tests were run locally and there were no status checks, changes could be deployed that hadn’t been fully tested.

Another issue that we had quite a few times was that code changes needed to align with database changes. I know it’s still a problem nowadays, but especially with this method, the running code could be incompatible with the database schema, so trying to update both at the same time didn’t work, and you ended up with failed requests from the user.

Given all these downsides, and as automation was starting to become popular, we decided to move to a more automated development approach. There are many CI/CD tools available: GitLab, GitHub Actions, GoCD. In this case, we used Jenkins. Even after automation, we didn’t have full continuous deployment, as production deployment still required a manual click to trigger the Jenkins pipeline. I’ve seen that quite a lot in larger companies and in services that need high availability because they still need that human in the mix to trigger the deployment.

The main benefit of using automation, as is probably quite clear, is that deploying code from version control instead of a developer’s local checkout reduced mistakes. As the full deployment process was triggered from a centralized place, Jenkins, at the click of a button, any local desktop inconsistencies were removed completely, which made everyone’s life a lot easier. Traceability of the deployments was also a lot better, as you can see in Jenkins everything that’s happened, so it’s easier to identify the root cause of any issues. In the case of Jenkins, as with most other tools, there are also quite a few integrations you can use to connect with the rest of your tooling.

Moving on to another automated approach, let’s look at GitOps for infrastructure automation. The term was first coined by Alexis Richardson, the Weaveworks CEO. GitOps is a set of principles for operating and managing software systems. The four principles are: the desired system state must be defined declaratively. The desired system state must be stored in a way that’s immutable and versioned. The desired system state is automatically pulled from source without any manual intervention.

Then, finally, continuous reconciliation: the system state is continuously monitored and reconciled to whatever’s stated in the code. CI/CD for application deployments gets plenty of attention, of course, but infrastructure automation can be overlooked, as infrastructure tends to change less often than the application code, even though infrastructure deployments can be pretty easy to automate. It’s also quite common for legacy systems to not have that infrastructure automation in place at all. There are lots of infrastructure as code tools already: generic ones like Terraform, Pulumi, Puppet, Ansible, or vendor-specific ones like Azure Resource Manager, Google Cloud Resource Manager, and AWS CloudFormation.

A tool that I’ve had some experience with is Terraform. I’m going to run through a very basic workflow to show you how easy it is to set up. It implements the GitOps strategy for provisioning infrastructure as code using Terraform Cloud to provision the AWS infrastructure from GitHub source code. I’ll show you the diagram. It’ll make it clearer. It’s something that could be used for new infrastructure or could be applied to existing infrastructure that isn’t already managed by code. Let’s go through an overview of the setup. It’s separated into two parts. The green components show the identity provider authentication setup between GitHub and Terraform Cloud, and AWS and Terraform Cloud. The purple components rely on the green components to provision the resources defined in Terraform in the GitHub repository.

I’ll show you some of the Terraform that can be used to configure the setup, as well as screenshots from the console, just to make it clearer. Let’s start by setting up the connection between Terraform Cloud and GitHub. This can be done using a Terraform resource, because there is a Terraform Cloud provider, as there is for many other tools. Unfortunately, this resource does require a GitHub personal access token, but it is still possible to automate, which is the point here. Here are the screenshots of the setup. This sets up the VCS connection between GitHub and Terraform Cloud. As you can see, there are multiple different options: GitHub, GitLab, Bitbucket, Azure DevOps. This is the permission step to authorize the connection between Terraform Cloud and your GitHub repository. This can be set at a repository level or at a whole organization level.
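For reference, that VCS connection can be declared with the `tfe_oauth_client` resource from the Terraform Cloud (TFE) provider. A minimal sketch, with the organization name and token variable as placeholders:

```hcl
# Connect Terraform Cloud to GitHub via an OAuth client.
# Requires a GitHub personal access token, supplied as a variable.
resource "tfe_oauth_client" "github" {
  organization     = "my-org"                        # placeholder organization
  api_url          = "https://api.github.com"
  http_url         = "https://github.com"
  oauth_token      = var.github_personal_access_token
  service_provider = "github"
}
```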

In terms of least privilege, it’s probably best to set it at the repository level if you can. This is an example of a Terraform Cloud workspace configuration, and then the link to the GitHub repo using the tfe_workspace resource. There are lots of other configuration options, and as always with infrastructure as code, it’s very powerful and easy to scale. You can create multiple workspaces all with the same configuration. This is a repository selection step to link to the new workspace. Only the repositories that you’ve authorized will appear. Once you’ve done that, it’s on to configuration. You’ve got the auto-apply settings. If you want to do fully continuous deployment, you can configure this to auto-apply whenever there’s a successful run. Then, what about the run triggers? You can get it to trigger a run whenever any changes are pushed to any file in the repo, or constrain it slightly, so restrict it to particular file paths or particular Git tags.
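A minimal sketch of such a workspace definition, assuming a `tfe_oauth_client` has already been created for the GitHub connection (workspace, organization, and repo names are placeholders):

```hcl
# Hypothetical workspace linked to a GitHub repo via the VCS connection.
resource "tfe_workspace" "s3_demo" {
  name         = "s3-demo"      # placeholder workspace name
  organization = "my-org"       # placeholder organization
  auto_apply   = true           # apply automatically after a successful plan

  vcs_repo {
    identifier     = "my-org/terraform-s3-demo"   # placeholder repo
    branch         = "main"
    oauth_token_id = tfe_oauth_client.github.oauth_token_id
  }
}
```

Because this is just code, creating multiple workspaces with the same configuration is a copy of a block or a `for_each` away.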

It’s so flexible that there are hundreds of different options that you can use, but, yes, the point here is that it’s easy to set up and configure to your use case. Then the PR configuration. Ticking this option will automatically trigger a Terraform plan every time a PR is created. Then the link to the Terraform plan is also in the GitHub PR, so it’s fully integrated. That’s the Terraform Cloud and GitHub connection setup. Moving on to Terraform Cloud and AWS connection. First, the identity provider authentication. This is the Terraform to get the Terraform Cloud TLS certificate and then set up the connection. This is a screenshot as well. Then you’ve got the IdP setup with the IAM role, allowing it to assume that role. That’s in here. Then you’ve got the Terraform Cloud AWS authentication saying what permissions Terraform Cloud has.

In here, it’s S3 anything, so it’s probably quite overly permissive, but it’s just showing the example. You can configure this to any AWS IAM policy that you like. That’s the screenshot. Now the AWS part has been configured. You just add the environment variables to Terraform Cloud to tell it which AWS role to assume and which authentication mode to use. That’s it.
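The identity provider side described above can be sketched like this, following the shape of Terraform Cloud’s dynamic credentials setup for AWS (the role name, organization, and workspace here are placeholders, not from the talk):

```hcl
# Fetch Terraform Cloud's TLS certificate to pin the OIDC provider.
data "tls_certificate" "tfc" {
  url = "https://app.terraform.io"
}

resource "aws_iam_openid_connect_provider" "tfc" {
  url             = "https://app.terraform.io"
  client_id_list  = ["aws.workload.identity"]
  thumbprint_list = [data.tls_certificate.tfc.certificates[0].sha1_fingerprint]
}

# IAM role that Terraform Cloud runs can assume via OIDC.
resource "aws_iam_role" "tfc" {
  name = "tfc-s3-demo" # placeholder role name
  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect    = "Allow"
      Principal = { Federated = aws_iam_openid_connect_provider.tfc.arn }
      Action    = "sts:AssumeRoleWithWebIdentity"
      Condition = {
        StringEquals = {
          "app.terraform.io:aud" = "aws.workload.identity"
        }
        StringLike = {
          # Scope the role to one org/workspace; placeholder values.
          "app.terraform.io:sub" = "organization:my-org:project:*:workspace:s3-demo:run_phase:*"
        }
      }
    }]
  })
}
```

An IAM policy attached to that role then defines what Terraform Cloud is allowed to do, which is where the S3 permissions mentioned above would go.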

To recap, the green components that have just been set up are the authentication between Terraform Cloud and GitHub and the auth between Terraform Cloud and AWS. This only needs to be done once for each combination of Terraform Cloud workspace, GitHub repository, and AWS account configuration. As we saw, to achieve full automation, it can also be done using Terraform, which means it’s scalable. However, at some point during the initial bootstrap, you do need to manually create a workspace to hold the state for that initial Terraform.

Let’s move on to the purple components. These are to provision the resources defined in Terraform. This is a very basic demo repo to create an S3 bucket in an AWS account. In a real-life scenario, it would also have the tests and any linting and validation too. The provider configuration is easy, because it’s already been set up. Then define some variables and the values. Then, of course, the main Terraform S3 bucket and two configuration options, there are many. Now that connection has been set up, what would we need to do to deploy? Create a new branch, commit the code, push to GitHub and create a PR, and that will trigger the plan. You can see what it’s going to provision in AWS. Once the PR has been approved and merged to the main branch, it will then trigger another plan.
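The demo resource itself might look something like this (bucket name and tags are illustrative; S3 bucket names must be globally unique):

```hcl
variable "bucket_name" {
  type    = string
  default = "my-demo-bucket-eu-west-1" # placeholder name
}

resource "aws_s3_bucket" "demo" {
  bucket = var.bucket_name

  tags = {
    Environment = "dev"
    ManagedBy   = "terraform"
  }
}
```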

If you set auto-apply, then Terraform apply will run automatically. Now that’s been configured, scaling this to manage a lot of resources is really easy. Say you want 30 S3 buckets instead of one, all with the same configuration. Write the code, create a feature branch, create a PR, let the automatic plan run, then merge it into main, and there you go: you’ve got 30 buckets, easy to manage, easy to configure. Any changes in the future, all you need to do is update the code. Same with an EKS cluster or any other resource.
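Scaling the same definition to 30 buckets is essentially a one-line change with `count` (a sketch; the naming scheme is made up):

```hcl
resource "aws_s3_bucket" "demo" {
  count  = 30
  bucket = "my-demo-bucket-${count.index}" # placeholder naming scheme

  tags = {
    Environment = "dev"
    ManagedBy   = "terraform"
  }
}
```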

Why automate? It makes it easy to standardize and scale, as we saw in that example. It gives good visibility and traceability over infrastructure deployments. It means multiple people can work in the same repo, and everyone can see what’s going on. If you have a proper Git branch protection strategy in place, there’s no risk of a Terraform apply running against outdated code. There’s also the option to import existing resources that have been created manually into your Terraform state using Terraform import. I’ve seen, especially with some legacy apps, that at the very beginning things were just created manually, but now they need to be standardized, and certain options, for example whether a resource should be public or private, now need to be set. Importing those into the Terraform state can help you align that. Just one caveat.
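On that import point: alongside the `terraform import` CLI command, newer Terraform versions (1.5 and later) also support declarative `import` blocks, which fit the GitOps flow nicely because the import itself goes through a PR and a plan. A sketch with a made-up bucket name:

```hcl
# Adopt a manually created bucket into state on the next plan/apply.
import {
  to = aws_s3_bucket.legacy
  id = "manually-created-legacy-bucket" # placeholder bucket name
}

resource "aws_s3_bucket" "legacy" {
  bucket = "manually-created-legacy-bucket"
}
```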

There’s a paid-for tier of Terraform Cloud, and that tier also has drift detection, which is obviously another point in favor of the GitOps side of things, but you do have to pay for it. This example used Terraform Cloud, GitHub, and AWS. Of course, there are many other tools out there with an equally rich set of integrations. They also tend to have code examples and step-by-step guides to make things easier.

To recap, choosing between continuous delivery and continuous deployment might depend on your organizational policy or general ways of working, especially for production. It’s also worth considering the different system components and which strategy is best for each. For example, it might be best to use continuous delivery for database provisioning, but continuous deployment for application code. Are different strategies needed in different environments? For example, continuous deployment for dev, while staging and production actually need that manual step. There are lots of tools available: Jenkins, CircleCI, GitLab, GitHub Actions, and many more. You need to choose the best one for your use case. Is one of them on your company Tech Radar? Are any of the others being used by other teams in your department?

Setting Clear Responsibilities

Moving on to setting clear responsibilities. I’ll tell you a story about a small operations department that supported a number of product teams. This is a true story. On Monday afternoon, a product team deploys a change. Everything looks good, so everyone goes home. At 2 a.m. on Tuesday, the operations team gets an on-call alert that clears after a few minutes. At 4 a.m., they get another alert that doesn’t clear. Unfortunately, there’s no runbook for this service explaining how to resolve the issue, and the person on-call isn’t familiar with the application. They’ve got no way to fix the problem. They have to wait five hours, until 9 a.m., when they can contact the product team to get them to take a look. Later that day, another team deploys a change. Everything looks good, they all go home. However, this time at 11 p.m., on-call gets an alert.

This time, the service does have a runbook, but unfortunately, none of the steps in the runbook work. They request help on Slack, but no one else is online. They call the people whose phone numbers they have, but there’s no answer. They have to wait until people come online in the morning, a good few hours later, to resolve it. Not a good week for on-call, and definitely not a pattern that can be sustained long-term.

What are some reliability solutions? In the previous scenario, out-of-hours site reliability was the responsibility of the operations team, while working hours site reliability was the responsibility of the product team. The Equal Experts playbook describes a few site reliability solutions. You build it, you run it, as you’ve heard quite a lot. The product team receives the alerts and is responsible for support. The other option is Operational Enablers, which is a helpdesk that hands over issues to a cross-functional operational team, or Ops run it. An operational bridge receives the alerts, hands over to Level 2 support, who can then hand over to Level 3 if required. Equal Experts advocates that you build it, you run it for digital systems, and Ops run it for foundational services in a hybrid operating model.

Then, how about delivery solutions? In the previous scenario, the product team delivered the end-to-end solution, but they weren’t responsible for incident response. Of course, different solutions might work for different use cases. A large company might have dedicated networking, DBA and incident management teams. For a smaller company, some of these roles or teams might be combined. This is a solution from the Equal Experts playbook. You’ve got, you build it, you run it. The product team is responsible for application build, testing, deployment, and incident response, or Ops run it. The product team is responsible for application build and testing, before handing over to an operations team for change management approval and release.

Applying the you build it, you run it model to the original example, the product team would be the ones who’d be responsible for incident response, which means they might not have deployed at 4 p.m., because they didn’t want to be woken up at night. Also, if they were on-call, they could probably have solved it a lot more quickly because they know their application.

Just to recap, all services should have runbooks before reaching production. This means anyone with the required access can help support the application if needed. Runbooks should be regularly reviewed, and any changes to the application, infrastructure, or running environment should be reflected in the runbook. If possible, it can be worth setting up multiple levels of support. Level 1 support can work through the runbook, and if they can’t deal with the issue, they can hand over to a subject matter expert who can hopefully resolve it more quickly. Monitoring and alerting should be designed during the development process, and alerts can be tested in each environment. Testing in dev makes things a lot easier when you get to staging, and means fewer alerts in production.

Then, if budget allows, it can be worth using a good on-call and incident management tool, for example, PagerDuty, Opsgenie, ServiceNow, or Grafana. There are many of them. They can give you things like real-time dashboards, observability, and easy on-call scheduling, and a lot of them support automated configuration. To use the Terraform example again, a lot of them have Terraform providers, or providers for other infrastructure as code tools, and they’re quite easy to set up and configure.
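As an illustration of that automated configuration, the PagerDuty Terraform provider lets you define users and on-call schedules in code. A minimal sketch, with the names, email, and times all made up:

```hcl
resource "pagerduty_user" "dev" {
  name  = "Jo Developer"   # placeholder user
  email = "jo@example.com" # placeholder email
}

resource "pagerduty_schedule" "primary" {
  name      = "Primary On-Call"
  time_zone = "Europe/London"

  layer {
    name                         = "Weekly rotation"
    start                        = "2025-01-06T09:00:00Z"
    rotation_virtual_start       = "2025-01-06T09:00:00Z"
    rotation_turn_length_seconds = 604800 # one-week turns
    users                        = [pagerduty_user.dev.id]
  }
}
```

Keeping the on-call rota in version control gives it the same review, traceability, and rollback benefits as the rest of the infrastructure.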

Psychological Safety

Let’s move on to psychological safety. I’m going to tell you a story. Once upon a time, there was an enthusiastic junior developer. Let’s call them Jo. Jo was given a task to clean up some old files from a file system. Jo wrote a script. They tested it out. Everything looked good. Jo then realized there were a ton of other similar files in other directories that could be cleaned up. They decided to go above and beyond and refined the script to search for all directories of a certain type. They tested it locally. Everything looked good to go. They ran the script against the internal file system and all of the expected files were deleted. They checked the file system and saw there was tons of free space. They gave themselves a pat on the back for a job well done.

However, suddenly other people in the office started to mention their files were missing from the file system. Jo decided to double-check their script, just in case. Jo realized that the command to find the directories on the file system to delete files was actually returning an empty string. They were basically running rm -rf /* at the root directory level and deleting everything. Luckily for Jo, they had a supportive manager and they went to tell them what happened. Their manager attempted to restore the files from the latest backup, but unfortunately that failed. The only remaining option was to stop all access to the file system and try and salvage what was left, and then restore from a previous day’s backup. As at this time, most people were using desktop computers and the internal file system for document storage, not much work was done for the rest of the day. The impact was quite small in this case, but obviously in a different scenario, it could have been a lot worse.

What happened to Jo? Jo certainly learnt a lesson. Luckily for Jo, the team were quite proactive. They ran an incident post-mortem to learn from the incident, find the root cause of the problem, and identify any solutions. The outcome was quite good. They put a plan in place to implement least privilege access, and regular backup and restore testing was also put in place. Then Jo, of course, never ran a destructive, untested development script in a live environment again. Least privilege access, as I’m sure most of you know, is a cybersecurity best practice where users are given the minimum privileges needed to do a task. What are the main benefits? In general, forcing users to assume a privileged role can be a good reminder to be more cautious. It also protects you as an employee.

In the absence of a sandbox, you know you can test out new tools and scripts without worrying about any destruction of key resources. It also protects the business as they know that only certain people have those privileges to perform destructive tasks. Then, in larger companies, well-defined permissions is good evidence for security auditing, and it provides peace of mind for cybersecurity teams and more centralized functions.

What is psychological safety? The belief you won’t be punished for speaking up with ideas, questions, concerns, or mistakes. Amy Edmondson codified the concept in the book, “The Fearless Organization”. There have been many studies done on psychological safety. One example is Project Aristotle, which was a two-year study by Google to identify the key elements of successful teams. Psychological safety was one of the five components found in high-performing teams by that study. There are lots of workshops and toolkits that can provide proper training and give you more information.

To give an example of some questions you might see in a questionnaire, here are some of the questions from a questionnaire you can take yourself on Amy Edmondson’s website, Fearless Organization Scan. If you make a mistake on this team, it is often held against you. Members of this team can bring up problems and tough issues. People on this team sometimes reject others for being different. It is safe to take a risk on this team. It is difficult to ask other members of this team for help. Working with members of this team, my unique skills and talents are valued and utilized. Then, finally, no one on this team would deliberately act in a way that undermines my efforts.

How did working in a team with good levels of psychological safety help Jo? Jo acknowledged their involvement and shared the root cause of the problem so it could be dealt with as quickly as possible. If Jo hadn’t spoken up, it would probably have taken longer to actually find the root cause and fix it. Jo’s direct manager was approachable and acknowledged there were key learnings and improvements that could be made, and they both actively engaged in that post-mortem to find the solution. It’s always worth considering, if Jo had seen other people being punished for admitting their mistakes, would they have spoken up at all?

Recap

To recap on the key learnings: technology evolves quickly. Here are links to some of the things that we went through. We ran through general Tech Radars and custom Tech Radars and how they can be useful. We ran through the benefits and some pitfalls of InnerSource. Then, automation, which will save you time and effort in the long run. We touched on CI/CD and considerations for continuous delivery versus continuous deployment. Should you use different strategies for different environments and deployment types? Then the advantages of GitOps, and the demo of the GitHub, Terraform, and AWS setup.

Then, setting clear responsibilities. We ran through the Equal Experts delivery and site reliability solutions. How all services should have runbooks before they go to production. How it’s important to design and implement monitoring and alerting during the development process. Then, finally, how working in a psychologically safe working environment benefits everyone. We ran through some of the questions from that psychological safety questionnaire, and also how blameless incident post-mortems can be helpful in maintaining psychological safety by helping everyone learn from the incidents, finding the root cause of the problem, and also identifying the solutions.

Questions and Answers

Participant: With the move from the copy-paste bit to the GitOps bit, how long did that take? How did you manage that transition?

Allen: That took probably a few months to a year. We started with development and then moved to production. That was probably the biggest step. Obviously, we had other work priorities as well, so it was a case of trying to balance it in between those. Finally, once it was done, everyone realized the benefits, but it was a slightly painful process.

See more presentations with transcripts

Subscribe for MMS Newsletter

By signing up, you will receive updates about our latest information.

  • This field is for validation purposes and should be left unchanged.


Fauna to shut down FaunaDB service in May – InfoWorld

MMS Founder
MMS RSS

Posted on nosqlgooglealerts. Visit nosqlgooglealerts

Fauna, the provider of the NoSQL database FaunaDB, has said that it will shut down the service by the end of May due to the unavailability of capital required to support the database service and market it.

“…after careful consideration, we have made the hard decision to sunset the Fauna service over the next several months,” the company wrote in a blog post, adding that the sunset time will be May 30 at noon Pacific time.  

The company said all Fauna enterprise customers need to move their applications and data out of Fauna by that date, adding that after the specified date, all Fauna accounts and their associated data will be permanently deleted.



InfoQ Dev Summit Boston 2025: Real-World AI, Platform Engineering & DevEx Strategies

MMS Founder
MMS Artenisa Chatziou

Article originally posted on InfoQ. Visit InfoQ

InfoQ Dev Summit Boston 2025 (June 9-10, 2025) is where senior software developers, architects, and engineering leaders come together to tackle today’s most pressing challenges in AI adoption, platform engineering, and developer experience (DevEx). With a focus on topics that will be important over the next 18 months, this event from the team behind InfoQ and QCon delivers insights from practitioners building and scaling modern software systems so your team can execute with confidence.

“If you’re a senior developer, architect, or team lead, you know the landscape is evolving fast. So, where do you turn for actionable insights and not just buzzwords? InfoQ Dev Summit Boston on June 9-10: This event is different – no hidden product pitches, no fluff, just deep technical talks from practitioners who are building and scaling real systems. If you want to stay ahead, learn from your peers, and bring back concrete strategies to your team, this is where you need to be”.

—Eder Ignatowicz, InfoQ Dev Summit Boston 2025 Chair, senior principal software engineer & architect @RedHat.

More Than Just Talks: a High-Impact Learning & Networking Experience

At InfoQ Dev Summit Boston, the real value extends beyond the sessions. It’s about direct access to leading engineers, exchanging lessons with peers who’ve faced similar challenges, and engaging in meaningful discussions that don’t happen online. Whether it’s deep-dive speaker Q&As, spontaneous hallway conversations, or focused breakout discussions, attendees leave with practical knowledge and professional connections that drive impact long after the event.

If you’re looking for a space to go beyond surface-level discussions and engage with a community of senior engineers solving real-world software challenges, this is the summit to attend.

Seven Talks You Can’t Miss

1. Scaling AI in the Real World: Patterns That Actually Work
Phil Calçado, founder & CEO @Outropy, with 20+ years experience in software development, previously pioneered microservices architecture @SoundCloud, and scaled systems @DigitalOcean & @SeatGeek

Calçado shares hard-won lessons from scaling an AI assistant from prototype to 10,000 users, outperforming bigger players like Salesforce and Slack AI. This talk reveals what traditional patterns still work for AI systems – and which ones break under the pressure of stateful, stochastic architectures. Walk away with architectural insights that are immediately useful for teams shipping real AI products.

2. Empathy Driven Platforms: You Build it, Let’s Run it Together
Erin Doyle, founding engineer @Quotient and instructor @Egghead | 20+ years across full stack development in web and mobile, and platform engineering

The promise of DevOps—”you build it, you run it”— was meant to streamline ownership. However, as systems grow more complex, developers are increasingly burdened with operational overhead that impacts productivity and morale. In this talk, Doyle shares how platform teams can reduce this burden by building with empathy, prioritizing psychological safety, intuitive tooling, and collaborative design. Learn practical ways to create self-service platforms that developers actually want to use while supporting their growth and autonomy.

3. Theme Systems at Scale: How to Build Highly Customizable Software
Guilherme Carreiro, staff engineer @Shopify, championing the evolution of Liquid, with 14+ years in software development, previously led DMN tooling team @Red Hat

Liquid themes power thousands of Shopify storefronts with both speed and flexibility. Carreiro shares how to build secure, human-friendly DSLs, support visual editors for non-technical users, and enhance developer tooling via VS Code and language servers. If you’re building platforms for diverse users, this talk offers battle-tested lessons for scalable, customizable architecture.

4. Systems Thinking for Scaling Responsible Multi-Agent Architectures
Nimisha Asthagiri, principal data & AI @Thoughtworks, previously chief architect @edX

As multi-agent systems become core to enterprise AI, scaling them responsibly is critical. Asthagiri shares patterns for building modular, self-regulating agent networks and introduces oversight techniques to prevent unintended behaviors. With system-wide feedback loops and governance baked into design, attendees will leave better equipped to scale AI systems ethically and efficiently.

5. Thinking Like a Detective: Solving Cloud Infrastructure Mysteries
Brendan McLoughlin, frontend architect @CarGurus, former ember data maintainer, and previously open web consultant @Bocoup

When cloud services fail silently, debugging requires more than just code knowledge. McLoughlin takes a detective’s approach to cloud troubleshooting, showing how to investigate CDNs, WAFs, load balancers, and more. Attendees will gain practical techniques for mapping request paths, building better runbooks, and interpreting subtle failure clues to solve complex infrastructure issues.

6. AI-Enabled Delivery: Leveraging ChOP & LLMs for Learning and Certification
Wes Reisz, technical principal @Equal Experts, ex-Thoughtworker & ex-VMWare, 16-time QCon chair, and creator/co-host of “The InfoQ Podcast”

How do you build a meaningful, AI-powered certification experience from conference content? Reisz shares how he and the InfoQ/QCon team combined Chat-Oriented Programming (ChOP), Retrieval-Augmented Generation (RAG), and expert intuition to create the InfoQ Certified Architect in Emerging Technologies program. This session explores real-world patterns for integrating LLMs with human guidance, lessons from prompt engineering, and how to design systems that go beyond automation to enhance human learning.

7. Building an Internal Developer Portal That Empowers Developers
Travis Gosselin, Distinguished Engineer of Developer Experience @SPS Commerce, with 20+ years of experience

Internal Developer Portals promise better productivity, but successfully implementing one requires more than just tools. Gosselin walks through SPS Commerce’s approach to building a developer portal with real impact – from making a compelling business case to driving adoption and measuring outcomes. Whether starting from scratch or iterating on an existing platform, you’ll leave with actionable guidance for creating a developer portal that truly empowers your team.

Why These Topics Matter Now

Engineering teams are under pressure to scale AI responsibly, optimize platform engineering, and improve developer experience while maintaining velocity. InfoQ Dev Summit Boston focuses on execution over hype, ensuring you leave with:

  • Field-tested solutions: Talks from engineers actively solving these challenges in production environments.
  • Peer-driven insights: Direct conversations with practitioners who’ve faced the same problems you’re tackling.
  • Actionable takeaways: Practical strategies your team can apply immediately.
  • A high-value network: Meaningful connections with senior developers, architects, and technical decision-makers.

If you want to stay ahead of the curve and bring tangible insights back to your team, this is the event to attend! Early bird pricing ends April 15, 2025. Register now to save your seat.

Our next summit is InfoQ Dev Summit Munich 2025, which returns October 15-16, 2025, with more practitioner-led insights on tackling today’s critical software development challenges.

About the Author

Subscribe for MMS Newsletter

By signing up, you will receive updates about our latest information.



Microsoft Officially Supports Rust on Azure with First SDK Beta

MMS Founder
MMS Steef-Jan Wiggers

Article originally posted on InfoQ. Visit InfoQ

Microsoft has released the first beta of its official Azure SDK for Rust, enabling Rust developers to interact with Azure services. The initial release includes libraries for essential components such as Identity, Key Vault (secrets and keys), Event Hubs, and Cosmos DB.

This move signifies Microsoft’s recognition of the growing importance and adoption of the Rust programming language, both within the company and in the broader developer ecosystem. Rust is gaining popularity due to its performance, reliability, and memory safety features, making it well-suited for systems programming and high-performance applications. Its strong type system and ownership model help prevent common programming errors, leading to more secure and stable code. At the same time, its modern syntax and tooling contribute to a positive developer experience.
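The memory-safety claim above can be made concrete in a few lines. The following standalone sketch (illustrative only, not code from the Azure SDK) shows the ownership model at work: passing a String by value moves it, and the compiler rejects any later use of the moved value.

```rust
// Passing a String by value transfers ownership into the function.
fn consume(s: String) -> usize {
    s.len() // `s` is owned here and dropped when the function returns
}

fn main() {
    let greeting = String::from("hello");
    let n = consume(greeting); // ownership of `greeting` moves into `consume`
    // println!("{greeting}"); // would not compile: value used after move
    println!("length: {n}");
}
```

This class of compile-time check is what prevents use-after-free and double-free errors in Rust without a garbage collector.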

The beta SDK provides Rust developers with libraries designed to integrate with Rust’s package management system (cargo) and coding conventions. The included libraries, known as “crates” in the Rust ecosystem, can be added as dependencies to Rust projects using the cargo add command.

For example, to use the Identity and Key Vault Secrets libraries, they can run the following command:

cargo add azure_identity azure_security_keyvault_secrets tokio --features tokio/full

Next, the developer can import the necessary modules from the Azure SDK crates. The code for creating a new secret client using the DefaultAzureCredential would look like this:

use azure_identity::DefaultAzureCredential;
use azure_security_keyvault_secrets::SecretClient;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Create a credential using DefaultAzureCredential
    let credential = DefaultAzureCredential::new()?;

    // Initialize the SecretClient with the Key Vault URL and credential
    let client = SecretClient::new(
        "https://your-key-vault-name.vault.azure.net/",
        credential.clone(),
        None,
    )?;

    // Additional code will go here...

    Ok(())
}

After the Azure SDK release for Rust, Microsoft’s Cosmos DB team released the Azure Cosmos DB Rust SDK, which provides an idiomatic API for performing operations on databases, containers, and items. Theo van Kraay, a product manager for Cosmos DB at Microsoft, wrote:

With its growing ecosystem and support for WebAssembly, Rust is increasingly becoming a go-to language for performance-critical workloads, cloud services, and distributed systems like Azure Cosmos DB.

While Microsoft is now officially entering the Rust cloud SDK space with this beta release, Amazon Web Services (AWS) already offers a mature and official AWS SDK for Rust. This SDK provides a comprehensive set of crates, each corresponding to an AWS service, allowing Rust developers to build applications that interact with the vast array of AWS offerings.

Looking ahead, Microsoft plans to expand the Azure SDK for Rust by adding support for more Azure services and refining the existing beta libraries. The goal is to stabilize these libraries and provide a robust and user-friendly experience. Future improvements are expected to include buffering entire responses in the pipeline to ensure consistent policy application (like retry policies) and deserializing arrays as empty Vec in most cases to simplify code.

Lastly, developers interested in getting started with the Azure SDK for Rust can find detailed documentation, code samples, and installation instructions on the project’s GitHub repository. They can also watch the repository for announcements of new releases.



MongoDB: Does Hope Remain for the Stock After Massive Post Earnings Fall?

MMS Founder
MMS RSS

Posted on mongodb google news. Visit mongodb google news

MarketBeat tracked 15 analysts who updated their price target on Mar. 6 or later. On average, they lowered their target by 23%. This indicates that, in general, Wall Street doesn’t see the move in MongoDB stock as a drastic overreaction. However, the average updated price target of $294 still shows an implied upside north of 55% versus MongoDB’s Mar. 20 closing price.

This raises the question: Is MongoDB a buy-the-dip opportunity, or is there too much moving against this stock? Additionally, do its long-term opportunities still remain?

Below are details of MongoDB’s latest earnings and a perspective on what the stock’s future holds.

For the last quarter, MongoDB posted results that actually greatly exceeded analyst estimates. Its adjusted earnings per share of $1.28 were nearly double the $0.66 per share analysts anticipated. Additionally, the company grew revenue by 20% in the quarter, way above the nearly 14% projected.

The company pulled forward significant earnings and revenue in Q4, helping it achieve its big beats. The company noted that it had to recognize over $10 million more in revenue from its Enterprise Advanced product in the quarter than it expected. This was due to accounting rules for multi-year licenses.

This helped pull forward revenue and earnings, resulting in a big beat on both. Higher-than-expected consumption revenue from its Atlas segment also contributed to this.

However, even after adjusting for this pull-forward, MongoDB still fell short of expectations on earnings growth. Wall Street expected $4.04 of adjusted EPS in total for fiscal Q4 2025 and fiscal 2026 combined. Based on MongoDB’s results and midpoint guidance, it only sees $3.81 of adjusted EPS over those five quarters. That is a miss of just under 6%.
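The size of that shortfall is simple arithmetic on the figures quoted above. A quick check (using Rust, to match the code elsewhere in this issue) confirms the "just under 6%" characterization:

```rust
fn main() {
    let expected_eps = 4.04_f64; // Wall Street estimate, fiscal Q4 2025 + fiscal 2026 combined
    let guided_eps = 3.81_f64;   // MongoDB's results plus midpoint guidance
    let miss_pct = (expected_eps - guided_eps) / expected_eps * 100.0;
    println!("EPS miss: {miss_pct:.2}%"); // prints "EPS miss: 5.69%"
}
```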

Atlas is MongoDB’s product that it manages for customers on the cloud. Corporations manage Enterprise Advanced, also known as “Non-Atlas,” on-premises. Enterprise Advanced offers customers greater control and customization because of this.

Overall, markets have been hoping that further adoption of AI will reaccelerate the company’s annual revenue growth. This is because developers can use MongoDB’s database to build AI applications, a need that would drive demand. In fiscal 2024, revenue grew by 31%. In fiscal 2025, it grew by 19%. Now, MongoDB is forecasting growth of just 12% for fiscal 2026.

This big drop in growth has raised concerns. One factor causing this lower growth forecast is the fact that MongoDB signed many multi-year deals in fiscal 2024 and 2025. As a result, MongoDB has fewer customers with whom to renew deals in fiscal 2026.

This declining growth rate comes as MongoDB didn’t offer commentary suggesting that AI demand would have a big impact soon. The company mentioned that it expects AI-related progress to be gradual in fiscal 2026. This is because “most enterprise customers are still developing in-house skills to leverage AI effectively.”

Management added that they “expect the benefits of AI to be only modestly incremental to revenue growth in fiscal 2026.” Overall, the company sees fiscal 2026 as a “transition year” as it looks to prepare for its AI opportunity.

MongoDB Price Chart

At this point, MongoDB has a lot of negative sentiment around it. This is due to its weak guidance and management commentary that the AI opportunity may be farther away than some had hoped. Still, MongoDB remains bullish on AI, seeing it as a “once-in-a-generation” shift.

Yet, the firm did not explicitly reiterate its statement from Q3 that it believes it will capture its “fair share of successful AI applications” in the latest earnings call. Overall, MongoDB still has a solid chance to capitalize on this opportunity in the long term. However, with the negative sentiment for this stock and the general tech sector, it may be best to wait on the sidelines until more positive news regarding AI adoption arises.

Original Post

Article originally posted on mongodb google news. Visit mongodb google news
