Month: June 2025

In this blog we demonstrate how to create an offline-first application with optimistic UI using AWS Amplify, AWS AppSync, and MongoDB Atlas. Developers design offline-first applications to work without requiring an active internet connection. Optimistic UI builds on top of the offline-first approach by updating the UI with the expected data changes, without waiting for a response from the server. This approach typically relies on a local cache strategy.
Applications that combine offline-first design with optimistic UI offer several benefits for users: fewer loading screens, better performance thanks to faster local data access, reliable access to data while the application is offline, and cost efficiency. Implementing offline capabilities manually can take sizable effort, but tools exist that simplify the process.
We provide a sample to-do application that renders the results of MongoDB Atlas CRUD operations on the UI immediately, before the request round trip has completed, improving the user experience. In other words, we implement an optimistic UI that makes it easy to render loading and error states, while allowing developers to roll back changes in the UI when API calls fail. The implementation leverages TanStack Query, together with AWS Amplify, to handle the optimistic UI updates. The diagram in Figure 1 illustrates the interaction between the UI and the backend.
TanStack Query is an asynchronous state management library for TypeScript/JavaScript, React, Solid, Vue, Svelte, and Angular. It simplifies fetching, caching, synchronizing, and updating server state in web applications. By leveraging TanStack Query’s caching mechanisms, the app ensures data availability even without an active network connection. AWS Amplify streamlines the development process, while AWS AppSync provides a robust GraphQL API layer, and MongoDB Atlas offers a scalable database solution. This integration showcases how TanStack Query’s offline caching can be effectively utilized within a full-stack application architecture.
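As a rough sketch (not the sample repository's exact configuration), a TanStack Query client for this kind of offline-first setup can be created with explicit network-mode and cache-lifetime defaults; the option names below follow TanStack Query v5:

import { QueryClient } from "@tanstack/react-query";

// Minimal sketch: cache-friendly defaults for an offline-first app.
// "online" network mode pauses fetches and mutations while offline and
// resumes them once connectivity returns (this is the library default).
const tanstackClient = new QueryClient({
  defaultOptions: {
    queries: {
      networkMode: "online",
      staleTime: 60 * 1000,          // serve cached data for up to a minute without refetching
      gcTime: 24 * 60 * 60 * 1000,   // keep unused cache entries around for a day
    },
    mutations: {
      networkMode: "online",         // offline mutations are queued until the network is back
    },
  },
});

export { tanstackClient };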
Figure 1. Interaction Diagram
The sample application implements classic to-do functionality, and the app architecture is shown in Figure 2. The stack consists of:
- MongoDB Atlas for database services.
- AWS Amplify as the full-stack application framework.
- AWS AppSync for GraphQL API management.
- AWS Lambda Resolver for serverless computing.
- Amazon Cognito for user management and authentication.
Figure 2. Architecture
Deploy the Application
To deploy the app in your AWS account, follow the steps below. Once deployed, you can create a user, authenticate, and create to-do entries (see Figure 8).
Set up the MongoDB Atlas cluster
- Follow the link to set up the MongoDB Atlas cluster, database, user, and network access
- Set up the database user
Clone the GitHub Repository
- Clone the sample application with the following command
git clone https://github.com/mongodb-partners/amplify-mongodb-tanstack-offline
Set up the AWS CLI credentials (optional; needed only if you want to debug your application locally)
- If you would like to test the application locally using a sandbox environment, you can set up temporary AWS credentials locally:
export AWS_ACCESS_KEY_ID=
export AWS_SECRET_ACCESS_KEY=
export AWS_SESSION_TOKEN=
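With the credentials exported, you can typically start a local Amplify Gen 2 sandbox from the cloned repository. The commands below assume the repository follows the standard Amplify Gen 2 npm setup (an assumption, not verified against the repo):

npm install
npx ampx sandbox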
Deploy the Todo Application in AWS Amplify
1. Open the AWS Amplify console and select the GitHub option
Figure 3. Select Github option
2. Configure the GitHub Repository
Figure 4. Configure repository permissions
3. Select the GitHub Repository and click Next
Figure 5. Select repository and branch
4. Set all other options to default and deploy
Figure 6. Deploy application
Configure the Environment Variables
Configure the environment variables after the deployment completes
Figure 7. Configure environment variables
Open the application and test
Open the application at the URL provided and test it.
Figure 8. Sample todo entries
MongoDB Atlas Output
Figure 9. Data in MongoDB Atlas
Review the Application
Now that the application is deployed, let’s discuss what happens under the hood and what was configured for us. We used Amplify’s Git-based workflow to host our full-stack, serverless web application with continuous deployment. Amplify supports various frameworks, including server-side rendered (SSR) frameworks like Next.js and Nuxt, single-page application (SPA) frameworks like React and Angular, and static site generators (SSG) like Gatsby and Hugo. In this case, we deployed a React-based SPA. We can also add feature branches, custom domains, pull request previews, end-to-end testing, and redirects/rewrites. Amplify Hosting’s Git-based workflow enables atomic deployments, ensuring that updates are only applied after the entire deployment is complete.
To deploy our application we used AWS Amplify Gen 2, which is a tool designed to simplify the development and deployment of full-stack applications using TypeScript. It leverages the AWS Cloud Development Kit (CDK) to manage cloud resources, ensuring scalability and ease of use.
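For context, an Amplify Gen 2 backend is defined in TypeScript. The sketch below is illustrative only and is not the sample repository's actual backend definition (the sample wires AppSync to MongoDB Atlas through a Lambda resolver); the schema, authorization mode, and single-file layout are assumptions:

import { a, defineAuth, defineBackend, defineData, type ClientSchema } from "@aws-amplify/backend";

// Hypothetical Todo schema for illustration only.
const schema = a.schema({
  Todo: a
    .model({
      content: a.string(),
    })
    .authorization((allow) => [allow.owner()]),
});

export type Schema = ClientSchema<typeof schema>;

const auth = defineAuth({ loginWith: { email: true } });

const data = defineData({
  schema,
  authorizationModes: { defaultAuthorizationMode: "userPool" },
});

// Amplify turns this definition into CDK-managed resources (Cognito, AppSync, and so on).
defineBackend({ auth, data });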
Before we conclude, it is important to understand how our application handles concurrent updates. We implemented a simple, optimistic, first-come first-served conflict resolution mechanism: the MongoDB Atlas cluster persists updates in the order it receives them, so in case of conflicting updates, the latest arriving update overrides previous ones. This mechanism works well in applications where update conflicts are rare. Evaluate whether it suits your production needs, which may require more sophisticated approaches.
TanStack Query provides capabilities for more complex mechanisms to handle various connectivity scenarios. By default, it uses an “online” network mode, in which queries and mutations are not triggered unless you have a network connection. If a query starts while you are online but you go offline while the fetch is still in flight, TanStack Query also pauses the retry mechanism; paused queries resume once you regain network connectivity. To optimistically update the UI with new or changed values, we can also update the local cache with what we expect the response to be. This approach works well together with the “online” network mode: when the application has no connectivity, mutations do not fire but are added to a queue, while the local cache can still be used to update the UI. Below is a key example of how our sample application optimistically updates the UI with the expected mutation.
const createMutation = useMutation({
  mutationFn: async (input: { content: string }) => {
    // Use the Amplify client to make the request to AppSync
    const { data } = await amplifyClient.mutations.addTodo(input);
    return data;
  },
  // When mutate is called:
  onMutate: async (newTodo) => {
    // Cancel any outgoing refetches
    // so they don't overwrite our optimistic update
    await tanstackClient.cancelQueries({ queryKey: ["listTodo"] });
    // Snapshot the previous value
    const previousTodoList = tanstackClient.getQueryData(["listTodo"]);
    // Optimistically update to the new value
    if (previousTodoList) {
      tanstackClient.setQueryData(["listTodo"], (old: Todo[]) => [
        ...old,
        newTodo,
      ]);
    }
    // Return a context object with the snapshotted value
    return { previousTodoList };
  },
  // If the mutation fails,
  // use the context returned from onMutate to rollback
  onError: (err, newTodo, context) => {
    console.error("Error saving record:", err, newTodo);
    if (context?.previousTodoList) {
      tanstackClient.setQueryData(["listTodo"], context.previousTodoList);
    }
  },
  // Always refetch after error or success:
  onSettled: () => {
    tanstackClient.invalidateQueries({ queryKey: ["listTodo"] });
  },
  onSuccess: () => {
    tanstackClient.invalidateQueries({ queryKey: ["listTodo"] });
  },
});
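In a component, the mutation is then triggered like any other TanStack Query mutation. The call below is illustrative (the to-do content is a made-up value, and the isPending flag uses TanStack Query v5 naming):

// Fires onMutate immediately, so the new entry shows up in the UI
// before the AppSync/MongoDB Atlas round trip completes.
createMutation.mutate({ content: "Buy milk" });

// Pending and error state can drive loading and error UI.
const isSaving = createMutation.isPending;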
We welcome any PRs implementing additional conflict resolution strategies.

The developers of ScyllaDB have announced an update to the managed version of their database, aimed at matching capacity to workload demand.
ScyllaDB is an open source NoSQL database that’s compatible with Apache Cassandra. The developers of Scylla describe it as a much faster drop-in replacement for Apache Cassandra.
Scylla Cloud was already available on AWS and Google Cloud as a fully managed NoSQL DBaaS that could run data-intensive applications at scale across multiple availability zones and regions. Its benefits include extreme elasticity, with the ability to scale to terabytes of data ingestion and transaction processing in minutes, alongside 100% API compatibility with Apache Cassandra CQL and DynamoDB, and real-time streaming data processing via native Kafka and Spark plug-ins.
The latest release, ScyllaDB X Cloud, is described by its developers as a truly elastic database designed to support variable and unpredictable workloads. In practical terms, this release adds the ability to scale in and out almost instantly to match actual usage, hour by hour. In a blog post announcing the new version, ScyllaDB’s Tzach Livyatan said:
“For example, you can scale all the way from 100K OPS to 2M OPS in just minutes, with consistent single-digit millisecond P99 latency. This means you don’t need to overprovision for the worst-case scenario or suffer latency hits while waiting for autoscaling to fully kick in. You can now safely run at 90% storage utilization, compared to the standard 70% utilization.”
The new mode uses a different cluster type, X Cloud cluster, which provides greater elasticity, higher storage utilization, and automatic scaling. X Cloud clusters are available from the ScyllaDB Cloud application and API on AWS and GCP, running on a ScyllaDB account or your company’s account with the Bring Your Own Account (BYOA) model.
The new clusters are based on ScyllaDB’s concept of tablets: data in Scylla tables is split into tablets, smaller logical pieces, which are dynamically balanced across the cluster using the Raft consensus protocol. Tablets are the smallest replication unit in ScyllaDB, and the developers say they provide dynamic, parallel, and near-instant scaling operations, and allow for autonomous and flexible data balancing. They also ensure that data is sharded and replicated evenly across the cluster.
ScyllaDB X Cloud uses tablets to underpin elasticity. Scaling can be triggered automatically based on storage capacity, and as capacity expands and contracts, the database will automatically optimize both node count and utilization. Users don’t even have to choose node size; ScyllaDB X Cloud’s storage-utilization target manages that.
The use of tablets also supports running at a maximum storage utilization of 90%, because tablets can move data to new nodes so much faster, meaning ScyllaDB X Cloud can defer scaling until the very last minute.
ScyllaDB X Cloud is available now.
Presentation: Evaluating and Deploying State-of-the-Art Hardware to Meet the Challenges of Modern Workloads

Rebecca Weekly. Article originally posted on InfoQ.

Transcript
Weekly: I’m Rebecca Weekly. I run infrastructure at Geico. I’ll tell you a little bit about what that actually means. I’ll introduce the concept, and ultimately how we made our hardware choices for the efforts that we’re going through. I’m going to give the five-minute soundbite for what Geico is. Geico was founded in 1936. It’s been around a very long time. The sole purpose of what we live to do is to serve our customers, to make their lives better when things go bad. How do we do that? What does that look like? Just so you have a sense, we have over 30,000 employees. We have over 400 physical sites from a network perspective.
My team is responsible for connecting all of those different sites from a networking perspective. Then we have our primary regional offices. Those are 21 offices. Those are also connected. Then we have six on-prem data centers today. We have a pretty massive cloud footprint. My team owns the hybrid cloud across that experience, from the vended compute and storage down. I do not own any aspect of the Heroku-like platform stack. I really am at the vended compute and vended storage down across the clouds. That’s where I try to focus.
Geico – Infra Footprint
In order to tell you how I got to making hardware purchases, I want to take you back through how Geico ended up with an infrastructure footprint that we have today. In 2013, Geico, like many enterprises, made the decision to go all in on the cloud. We are going to do it. We’re going to exit our on-prem data centers and we’re going to go all in on the cloud. At that time, that’s when we had our six physical data center sites and all of our regional footprint, as I mentioned before. The reason why was maybe an interesting thought process. I’ll share it a little bit. I wasn’t there. What my understanding of the decision was, was a desire to move with more agility for their developers. They were feeling very constrained by their on-prem footprint in terms of their developer efficacy.
The thought was, if we go to the cloud, it has these fantastic tools and capabilities, we’re going to do better. This does not sound like a bad idea. The challenge was, they didn’t refactor their applications as they went to the cloud. They lifted and shifted the activities that were running on an SVC style storage SAN and on an on-prem footprint with old-style blade servers and separate L2, L3 network connectivity with subdomains all over, and a network segmentation strategy that really is something to behold. They moved that to the cloud, all in, 2014.
Fast forward to 2020, at that point, they were 80% cloud-based. Nearly 10 years into the journey, got almost all the way there. Prices had gone up for serving approximately the same load by 300%. Reliability dropped by two nines because of their surface area. I love the cloud. I worked at a cloud service provider. I am a huge fan of the cloud. Please take this with all due respect. It had nothing to do with a singular cloud. It had everything to do with how many clouds were selected. Every line of business chose their cloud du jour, generally associated with the anchor that they had. If they were using EDW, they’re going to end up in IBM.
If they were using Exadata, they’re going to end up in Oracle. This is how you end up with such a large proliferation and surface area, which when you add the complexity of wanting to have singular experiences for users, not shipping your org chart, creates a lot of latency, a lot of reliability challenges, egress fees, all sorts of fun, not-planned-for outcomes, which is how you go from a fairly flat compute load over a period of time, but get a 300% increase in cost. Not ideal, and a very large reliability challenge.
As I mentioned, not one cloud, not two clouds, but eight clouds. All the clouds. Including a primary cloud, absolutely. One that had more than half of the general spend, but every other cloud as well, and PaaS services and SaaS services, and many redundant services layered on top to try and give visibility or better composability to data. You just end up layering more things on top to try to solve the root problem, which is, you’ve increased your surface area without a strategy towards how you want to compose and understand and utilize your data to serve your customers. Should we start at the customer? That’s what our job is, is to serve them.
At that time, just to give you a sense of what that footprint became, just one of our clouds was over 200,000 cores and over 30,000 instances. Of that cloud, though, our utilization was on average 12%. In fact, I had scenarios where I had databases that had more utilization when doing nightly rebuilds than in the actual operation of those databases. Again, it’s a strategy that can be well done, but this is not the footprint of how to do it well.
What Changed?
To really dig into why, what changed, and how we got on this journey to look back at our infrastructure as a potential for core differentiation and optimization. One was the rising cloud costs. 2.5% increase in compute load over a 10-year period. 300% increase in cost. Also, when you look at the underlying features, there was one cloud in which we were spending $50 a month for our most popular instance type, which was a VM instance type. That was actually running on an Ivy Bridge processor. Does anybody remember when Ivy Bridge was launched? 2012. I’m paying $50 a month for that instance. That is not even a supported processor. These are the kinds of fascinating choices: it was the right choice when they moved, when that was a current processor type.
Once people go to a cloud, they often don’t upgrade their instances to the latest and greatest types, especially VMs, where an upgrade is more disruptive to the business. You end up with instances that are massively slow and potentially massively overprovisioned for what the business actually needs. Rising cloud costs, that’s how we got there. Number two, the premise of technology and developers being unlocked didn’t pan out. Why didn’t it pan out? Lack of visibility. Lack of consistency of data. Lack of actual appetite to refactor the applications. Lifting and shifting. We did things like take an ISV product that was working on-prem, migrate it as-is to the cloud, then that ISV later created a cloud offering, but we couldn’t take advantage of it because of where we were licensed and how we had built custom features on top of that. This is the story of every enterprise.
Every enterprise, whether you started with an ISV or an open-source stack, you start to build the features you need on top of it, and then it becomes very disruptive to change your business model to their managed service offering, their anything in that transition over. Unfortunately, it became even harder to develop for actually giving new services and features because we now had to add the elements of data composition, of SLOs across different clouds, increasing egress fees. The last thing was time to market. I just hit on it.
Ultimately, as a first-party tech group, our job is to serve a business. Their job is to serve our customers. If I can’t deliver them the features they need because the data they want to use or the model they’re looking at is in this cloud or this service model and my dataset is over here, I can’t say yes. The infrastructure was truly in the way. It was 80% of cloud infrastructure. That was where it was like, what do we have to do? How do we fix this to actually move forward?
The Cost Dilemma
I gave some of these numbers and I didn’t put a left axis on it because I’m not going to get into the fundamentals of the cost. You can look over on that far side, my right, your left, and see, at our peak, our cost structure when we made the decision to look at this differently. We still had our on-prem legacy footprint. That’s the dark blue on top, and our cloud costs. You can do the relative more than 2x the cost on those two because, again, we hadn’t gotten rid of the legacy footprint, we couldn’t. We still had critical services that we had not been able to evacuate despite 10 years. Now, how am I changing that footprint? The green is my new data center that came up in July, and another new one that we’re starting to bring up. You’re seeing part of the CapEx costs, not all the CapEx costs, in 2024. The final edge of the CapEx costs comes in 2026.
Then the end state over here is the new green for a proposed percentage. You all know Jevons paradox is such that as you increase the efficiency of the computation, people use it more. My assumption is the 2.5% growth rate, which is absolutely accounted for in this model, is not going to be the persistent growth rate. It’s just the only one that I can model. Every model is wrong, it’s just hopefully some are informative. That was the attempt here in terms of the modeling and analysis.
The green is the net-new purchase of CapEx. The blue is the cloud. I keep hearing people say we’re repatriating. We are looking at the right data to run on-prem and the right services to keep in the cloud. You can call that what you would like. I’m not a nation state, therefore I don’t understand why it’s called repatriation. We are just trying to be logical about the footprint to serve our customers and where we can do that for the best cost and actual service model. That is in perpetuity. We will use the cloud. We will use the cloud in all sorts of places. I’ll talk about that a little bit more in the next section.
As an insurer and as many regulated industries have, we have compliance and we have audit rules that mean we have to keep data for a very long time. We store a lot. How many of you are at a cloud service provider or a SaaS or higher-level service? If you are, you probably have something like an 80/20, 70/30 split for compute to storage.
Most people in financial services are going to look at something more like a 60/40 storage to compute, because we have to store data for 10 years for this state, 7 years for that state. We have to store all of our model parameters for choices around how we priced and rated the risk of that individual. We have to be able to attest to it at any time if there’s anything that happens in any of those states for anybody we actually insure. We don’t just have auto insurance. We are the third largest auto insurer. We also have life, motor, marine, seven different lines of business that are primaries, and underwriting. Lots of different lines of business that we have to store data associated with for a very long time. That looks like a lot of us. We have a lot of storage.
Storage is very expensive in the cloud. It is very expensive to recompose when you need to at different sites. That entire cycle is the biggest cost driver of why that cloud is so expensive. I want to use the cloud where the cloud is good, where it’s going to serve my end users with low latency in all the right ways.
If I have somebody who is driving their car as a rental car in Italy, I don’t build infrastructure anywhere outside of the U.S. I need to have the right kinds of clouds for content dissemination at every endpoint where they might use their insurance or have a claim. There’s always going to be cloud usage that makes perfect sense for my business. This is not the place where we want to be guardrailed only to that, because it’s a regulated low margin business. Reducing my cost to serve is absolutely critical for being able to actually deliver good results to my end users. This is the cost dilemma that I was facing, that our leadership team was facing, that as we looked at it, we said, this is what we think we can do given the current load, given what we know about our work. That’s how we got the go ahead.
Hybrid Cloud 101
I’m going to start with, how do we make a decision? How do you actually do this at your company? Then, go and delve into the hardware, which is my nerd love of life. First, alignment on the strategy and approach. I just gave you the pitch. Trust me, it was not a four-minute pitch when we started this process. It was a lot of meetings, a lot of discussions, a lot of modeling, seven-year P&L analyses, all sorts of tradeoffs and opportunities and questions about agility.
Ultimately, when we got aligned on the strategy and approach, then it’s making sure you have the right pieces in place to actually drive a cloud migration and solution. You got to hire the right people. You got to find the right solutions for upskilling your people that want to do new things with you. That was not a small effort. There’s lots of positions within Geico tech across the board. That has been, I think, an incredible opportunity for the people coming in to live a real use case of how and why we make these decisions. That’s been a lot of fun for me personally is to build a team of awesome people who want to drive this kind of effort.
Number three, identify your anchor tenants and your anchor spend. What do I mean by that? I’ve now used the term twice, and I’m going to use it in two different ways. I’m going to use it this way for this particular conversation. Your anchor services or anchor tenants are the parts of your cloud spend you don’t want to eliminate. These are likely PaaS services that are deeply ingrained into your business. It may be a business process flow that’s deeply ingrained, like billing, that has a lot of data that you don’t necessarily want to lose. Or it might be something like an innovative experience. Earlier sessions talked about generative AI and talked about hardware selection for AI.
There are so many interesting use cases for the core business of serving our customers in their claims, whether that’s fraud detection and analysis, whether that’s interactive experiences for chatbots, for service models, where we want to take advantage of the latest and greatest models and be able to do interesting things for our developers. Those are the kinds of use cases that are tying us to various clouds. Maybe CRM services. Every business is different, but whatever they have, you need to work with your partners. The infrastructure has to serve the business. We don’t get to tell them what to do. They have to tell us what they need to do. Then we look for the opportunities to optimize the cost to serve across that footprint.
Identifying those anchor services, and then the data. How is the data going to flow? What do you need the data for? Who are the services and users, and what are they going to need? How do we keep it compliant? How do we keep it secure across the footprint? Those are much more difficult conversations. Because everyone wants their data right where they are, but massive cost savings by putting it on-prem. What needs to actually be there? How do you create the right tiering strategy with your partners? Which you start to talk about different language terms than many businesses use. They don’t know SLOs. That’s not their life. They aren’t going to be able to tell me a service level objective to achieve an outcome. They will give you a feeling or a pain point of an experience that they’ve had.
Then, I have to go figure out how to characterize that data so that I have a target of what we can’t get worse than for their current experience or where they have pain today, and so where we need to turn it to, to actually improve the situation. Modeling your data, modeling and understanding what is needed where. Then, making sure you have real alignment with your partners on the data strategy for, might be sovereignty, I don’t personally have to deal with sovereignty, but certainly, for compliance and audit, is absolutely critical. It requires a lot of time. It has a lot of business stakeholders.
It is the most often overlooked strategy whether you’re going to the cloud or coming from the cloud. It will cost you if you don’t take the time to do that correctly. It will cost you in business outcomes. It will cost you in your customer service experience. It will certainly cost your bottom line. This is the step everyone forgets. Know what they don’t want to let go of, because you will not serve them well if you take it away. Know what they need to have access to, and make sure it is the gold. It is the thing that they have access to always.
Now you got that. You’ve got a map. You’ve got a map of your dependencies. You’ve got a map of your SLOs. You’ve got a map of where you’re trying to go. Now you need to look at your technology decisions. You get to choose your hardware, your locations for your data centers, your physical security strategies, all sorts of fun things. Then, you got to figure out what you’re going to expose to your users to actually do the drain, to actually move things from one location to another. You need to really understand what you’re building, why you’re building it, and then how you’re going to move people to actually create the right scenario to actually execute on this cost savings and vision. Then you create a roadmap, and you try and actually execute to the roadmap. That is the overview of what I’m about to talk about.
1. Start with Your Developers
Me, personally, I always want to start with my developers. I always want to start with my customer. For me, infrastructure, we’re the bottom. We’re the bottom of the totem pole. Everybody is above us. I need to know my data platform needs. I need to know all my different service layer needs on top of the platform, whether it’s your AI, ML. Then, ideally, you also need to turn that into an associate experience and a business outcome.
This is a generic stack with a bunch of different elements for the data and control plane. No way that I can actually move the ephemeral load or the storage services if I haven’t exposed a frontend to my developers, particularly my new developers, that is consistent across the clouds. Start with a hybrid cloud stack. What are you offering for new developers? Stand it up tomorrow. It’s the most important thing you’re going to do in terms of enabling you to change what’s happening below the surface. We start there. We start with our developers, what we need to expose. Those are good conversations. What do they need? If you have one team that needs one particular service, they should build it. If you have a team that’s going to actually build a service that’s going to be used by 4 or 5 or 7 or 12 different applications, that’s one that belongs in your primary platform as a service layer. Kafka, messaging, that’s a good example of one that you probably are going to want to build and have generically available across clouds. That’s the way.
2. Understand your Cloud Footprint
I will actually talk about turning the cloud footprint into a physical footprint and how to think about those problems. Our most primary cloud had over 100 different instance types. This is what happens when you lift and shift a bunch of pets to the cloud. You end up with a lot of different instance types because you tried to size it correctly to what you had on-prem. The good news about the cloud is that there’s always a standard memory to compute ratio. You’re going to get 4 gigs or 8 gigs or 16 gigs or 32 gigs per vCPU. That’s the plan. That’s what you do. You have a good provisioned capacity. Note I say provisioned. Utilization is a totally different game, and how you actually measure your utilization is where you’re going to get a lot of your savings. It’s important that you understand provisioned capacity versus utilization. What did we do? We took that big footprint of 103 whatever different instance types and we turned it into a set of 3 primary SKUs.
A general-purpose SKU, a big mem SKU, and an HPC style SKU for all the data analytics and ML that is happening on our company’s footprint. That was the primary. Then we had a bunch of more specialty variants. I don’t particularly want to keep a bunch of specialty variants for a long time. Again, not where infrastructure on-prem is going to be your best cost choice in savings. For now, there are certain workloads that really did need a JBOF storage, cold storage SKU. This is a big savings opportunity for us. There were definitely reasons why we got to that set of nine.
This one, we could have a longer debate about. Your provisioned capacity of network to your instance is very hard to extract. Each cloud is a little bit different in this domain. You can definitely see how much you’re spending on your subnets. You can see the general load across. You can see the various elements of your network topology designed in your cloud. Usually your instance type will tell you your provisioned capacity, but correlating that against how much of the network interface you’re actually using, in terms of real gigabits, is very hard. Different clouds have different choices. You can do a lot of things with exporters. If you’re using any kind of OpenTelemetry exporter and you have those options, you can try.
The hardware level metrics that we who build on-prem are used to, Perfmon, everything that you can pull out of PMU counters, everything you can pull out of your GPU directly if you’re actually in that space, you do not get that. That is probably the hardest area. Got a good sense of compute, something to think about in general from a compute perspective turning cloud to on-prem, is that for cloud, you are provisioned for failover. You have to double it. Also, if your utilization is, let’s say, 8%, 12%, 14%, you can hope your users will get towards 60% in your on-prem footprint as you move them towards containers. There’s hope and there’s necessity. Nothing moves overnight. You can do some layering to assume you’re going to get better utilization because you’ll have better scheduling, because you’ll have more control.
Ultimately, you still have to leave a buffer. I personally chose to buffer at the 40% range. Everyone has a different way they play the game. It’s all a conversation of how fast you can manage your supply chain to get more capacity if you take less buffer. Double it and assume you’re going to lose 40% for failover for HA, for all the ways in which we want to make sure we have a service level objective to our end users for inevitable failures that are happening on-prem.
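To make those rules of thumb concrete, here is one possible reading of them as a small worked sketch. The numbers are illustrative, loosely taken from figures mentioned in the talk, not an actual capacity plan:

// Rough capacity-sizing sketch based on the rules of thumb described above.
// All inputs are illustrative assumptions.
function requiredOnPremCores(
  provisionedCloudCores: number, // cores provisioned in the cloud today
  cloudUtilization: number,      // e.g. 0.12 observed average utilization
  targetUtilization: number,     // e.g. 0.60 hoped-for on-prem utilization with containers
  failoverFactor: number,        // e.g. 2, double for HA/failover
  bufferFraction: number         // e.g. 0.40 headroom you refuse to schedule
): number {
  const actualWork = provisionedCloudCores * cloudUtilization; // cores of real load
  const baseline = actualWork / targetUtilization;             // cores needed at target utilization
  return Math.ceil((baseline * failoverFactor) / (1 - bufferFraction));
}

// 200,000 provisioned cloud cores at roughly 12% utilization:
console.log(requiredOnPremCores(200_000, 0.12, 0.6, 2, 0.4)); // about 133,334 cores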
3. Focus On your Route to the Cloud
Let’s talk about the network. Enterprises have fascinating franken-networks. I did not know this, necessarily. Maybe I knew this as an associate working at a company in my 20s. What happens? Why did we get here? How does this happen? What used to happen before COVID, before the last 15 years of remote work, is people had to go to the office to do their work. The network was the intranet, was the physical perimeter. This is what most of these businesses were built to assume. You had to bring your laptop to the office and plug in to be able to do your service. Then, during COVID, people had to go remote. Maybe that was the only first time it happened. Maybe it happened 10 years before that because they wanted to attract talent that could move and go to different locations, or they just didn’t want to keep investing in a physical edge footprint.
Whatever the reason, most of those bolted on a fun solution three hops down to try and do what we would normally do as a proxy application user interface to the cloud. Assume you have a large edge footprint and a set of branch offices on, let’s say, MPLS, which is rather expensive, it can double your cost per gig easily. Some places you’ll probably see it 5, 6, 10 times more expensive per gig. You’re probably underprovisioned in your capacity because you paid so much for this very low jitter connectivity, which someone sold you. That’s on a cloud. I’m not counting that in my eight clouds. I just want you to know that MPLS is done through a cloud, and you actually don’t know that it’s distinct two or three hops away from where you are. Lots of failure domains because you’re probably single sourced on that.
Then, it gets backhauled to some sort of a mesh ring. You’ve got some mesh that is supporting, whether it’s your own cloud, whether it’s their cloud, there’s some mesh that is supporting your connectivity. That goes into maybe your branch office, because, remember, all your network protocol and security is by being on-prem, so you’ve got to backhaul that traffic on-prem. Then that will go out to a CNF, some sort of a colocation facility, because that’s probably where you were able to get a private network connection, which if you are regulated, you probably wanted a private network connection to your cloud provider. That’s the third hop. Now that’s maybe the fourth hop, it depends.
Then you go to a proxy layer where you actually do RBAC, identity access-based control. Does this user, does this developer, does this application have the right to access this particular cloud on this particular IP? Yes, it does. Fantastic. You get to go to that cloud or you get to have your application go out to the internet. I talk to a lot of people in my job at different companies.
Most of us have some crazy franken-network like this. This is not easy to develop on. Think about the security model you have to enforce. Think about the latency. Think about the cost. It’s just insane. This is your route to the internet. This is going through probably an underprovisioned network. Now you have to think through, where do I break it? How do I change the developer compact so that this is their network interface? Wherever they are, any of these locations, it goes to a proxy, it goes out to the network. That’s it. That makes your whole life simpler. There’s a lot of legacy applications that don’t actually have that proxy frontend, so you have to build it. You have to interface to them. Then you manage it on the backend as you flatten out this network and do the right thing. It’s probably the hardest problem in most of the enterprises, just to give you a sense of that network insanity.
4. Simplify your Network and Invest in Security at all Layers
Again, all those boxes are different appliances. Because you have trusted, untrusted, semi-trusted zones, which many people believe is the right way to do PCI. Makes no sense. In the cloud, you have no actual physical isolation of your L2 and your L3, so if you promulgated this concept into your cloud, it’s all going on L3 anyway. You’re just doing security theater and causing a lot of overhead for yourself, and not actually doing proper security, which would be that anybody who’s going across any network domain is encrypted, TLS, gRPC. You’re doing the right calls at the right level and only decrypting on the right box that should have access to that data.
That is the proper security model, regardless of credit card information, personal information. This security theater is not ok. It’s not a proper model to do anything, and it’s causing a lot of overhead for no real advantage. Nice fully routed core network topology. Doesn’t have to be fully routed. You can actually look at your provisioning, your failure rates, your domains and come up with the right strategy here. That is not the right strategy. Maybe I’ll put one more point on it. Once you look at this franken-network you have and the security model you have, regardless of the provisioned capacity that somebody is experiencing in the cloud, it’s usually going to be the latency that has hit them long before the bandwidth. There is a correlation, loaded latency to your actual bandwidth.
Fundamentally, the problem to solve is the hops. Make the best choice from a network interface card and a backbone perspective as possible. Interestingly enough, the hyperscalers buy at the 100-gig-plus increments, so even if I could have gotten away with 25 gig or lower levels, it generally pays to go where the sweet spot of the market is. You’re not going to change your network design for at least five to seven years. It’s just not going to happen. Better to overprovision than underprovision. Go for the best sweet spot in the market. It’s not 400 gig, but 100 gig is a pretty good spot, and 25 gig might be fine for your workloads and use cases.
5. Only Buy What You Need
Only buy what you need. I already gave you my rules around where you want to have doubling of capacity from your cloud footprint to your on-prem footprint, how you want to buffer and think through your capacity in those zones. When you’re actually looking at your hardware SKUs, very important to only buy what you need. I have a personal bias on this. I’m going to own it. A lot of people who sell to the enterprise sell a lot of services. Whether that’s a leasing model. Whether that’s call me if you need anything support. Whether that’s asset management and inventory. Whether that’s management tools to give you insights or DSIM tools to give you insights. These to me don’t add value. Why don’t they add value? Supply chain has been a real beast for the last four years.
If I’m locked into every part of my service flow, running on somebody’s DSIM that only supports their vendor or doing a management portal that only supports them as a vendor, I have lost my ability to take advantage of other vendors who might have supply. When I say only buy what you need, I mean buy the hardware. Run open source. It’s actually quite excellent. They’re probably using it if they’ve been using the cloud from a developer perspective, at least at the infrastructure layers. I’m not talking about PaaS layers. Truly at the infrastructure layers, they’re probably running Linux. They’re probably running more open-source choices.
If that’s the case, I personally looked at ODM hardware. I like ODM hardware. There’s ODM. ODM is a model, you can buy from somebody who’s a traditional OEM in an ODM style. That’s basically to be able to purchase the hardware that you want, to have visibility into your firmware stack, your BIOS, your maintenance, so that you actually can deploy and upgrade if you need to in post. Which is important to me, because right now I have a massive legacy footprint, but a bunch of developers building net-new stuff. What my memory ratios are right now may not be what they want to have in the next two years and three years, or storage, or fill in your note.
Doing this work, basically, and taking a model here of 1,000 cores, 1 terabyte of memory, yes, 1 petabyte of storage, so just normalizing out, we got about 50% or 60% less. That’s with all the bundling I mentioned. Double your capacity and buffer 40%, it’s still that much cheaper than those equivalent primary SKUs for vended capacity for compute and storage. That has nothing to do with PaaS, nothing to do with the awesome things in cloud. This is very specific to my workloads and my users. Your mileage may vary.
6. Drive your Roadmap
Finally, go take that puppy for a ride. You got to go do it. It’s great to have a model. It’s great to have a plan. You have to actually start that journey. We started our journey about June of 2023. The decisions were started and made in February, the analysis began, of 2023. We started making our first contact towards actually buying new hardware, actually looking at new data center location facilities in June of 2023. Issued out our RFPs, got our first NDAs signed, started our evaluations on those.
Actually, did our physical site inspections, made sure that we understood and knew what we wanted to contract for based on latency characteristics. By basically July of this year, we had our first data center site up and built on the new variety that is actually geo distributed. That was not actually taken into account in the six data centers we had previously. We’re a failover design. Then, had our first units delivered in September. Had everything up, running, debugged, and started to serve and vend capacity and compute through the end of this month for our first site.
Then, our second site coming up next year. Those didn’t happen overnight by any means. If I were to show you the hardware path of building out a new Kubernetes flow, of actually ramping and pushing up OpenStack for our fleet management on-prem, those were happening very similar timeframes, end of last year to build up the first, to really give a consistent hybrid cloud experience for our end users to onboard them there, running on top of public clouds, but getting away from the vendor locked SDKs into true open source. Then, giving us capabilities of later migrating so that you stop the bleed as you then prepare underneath the new hardware you want to have.
Lessons Learned
I have one more: things I wish someone had whispered into my ear when we started this journey. You cannot underestimate how much you need the right team. To go from buyers to builders, you need the right folks who can do that. Doesn’t mean you can’t teach people. Doesn’t mean you can’t work together. You need the right senior technical talent and the right leaders in the right spots who’ve done this before. You need at least two years. You need a leadership team who understands it’s going to take two years. That they have to be wanting and willing. I’ve seen too many partners on this journey, or friends on this journey say, yes, it seems like a good analysis, but six months in, nine months in, new leader, we’re out. You’re not going to succeed in anything if you don’t have the willpower to stick with it. It’s going to be at least two years. Hardware and software don’t come together, or they shouldn’t. You need to really think through your software experience, your user experience, your hybrid cloud. You can do that now.
There are so many ways in which vendors get locked. You get locked into the services and the use cases of Amazon or Microsoft, or love them all. You can start to break that juggernaut immediately. You should for any reason. Whether you’re coming on-prem, whether you’re doing a hybrid cloud strategy, whether you want to find a different cloud for a specific use case. There’s a bunch of GPU focused clouds because it’s hard to get GPUs in the clouds. Whatever your reason is, understanding what is anchor and you’re going to keep it, and taking everything else out of the proprietary stacks gives you autonomy in making decisions as a business. If you care about margins whatsoever, it’s a good choice. Detailed requirements.
If there’s anything I found on this journey that I had to say to myself over and again is, do not let perfect be the enemy of dumb. It’s all going to change anyway. That’s the point. Take the best signal you can from the data you have, make the decision you have. Document your assumptions and your dataset and what might change it, and then go. Just go. Just try and create an environment for your team that it’s ok. That you’re going to screw up, it’s ok. Because there’s no way to get it right straight out of the gate.
The best thing you can do is to talk to your customers and make sure you really understand their requirements in their language. If you don’t have those conversations, you are definitely wrong. Maybe the other thing that is interesting is, open is not so open, depending on which layer of the stack you’re looking at, depending on even if you think you’re on the managed Kubernetes service that should in theory be the same as Kubernetes, no, it’s not. They’ve built all sorts of fun little things on the backend to help you with scaling.
Breaking it, even where you think you’ve chosen a more reasonable methodology, can be hard. I would be remiss if not saying, in this journey, there’s a lot of people who have helped us in the open-source community. That has been wonderful. Whether it’s CNCF and Linux Foundation, OpenStack, OpenBMC, Open Compute Project. This community of co-travelers is awesome. Very grateful for them. We’re members of most of these organizations.
Questions and Answers
Participant: The two years, the timeframe that you said, is it per data center?
Weekly: For me, that two-year roadmap is to go from six on-prem data centers to two data centers. Again, whether you do two or three is a choice for every company. You need two, because you want to have active-active. Unless you have a truly active-passive footprint, which maybe you do. Most companies want an active-active footprint, so you need two physical sites. If you have only two physical sites, you’re going to be writing your recovery log to the cloud. That is your passive site. That is your mirror. If you would rather do that on-prem, then you would want a third site. That’s a choice. It should come down to your appetite for spending in the cloud, where and why and how you want to think through your active-active and your recovery time. Cloud’s a great recovery place. It tends to have pretty good uptime when you have one of them. We’ve had some consternation given our journey and our experience in the cloud.
Again, I think that’s very much to the user, if you want to do three sites versus two. That’s the two years for the six to the two, or three. The longest end is usually the contracting on the frontend, doing the assessment, doing the site assessment, making sure they have the right capacity. Depending on what you’re purchasing from a data center colocation facility provider, I’m a huge fan of colocation facilities, if you are less than 25 megawatts, you don’t need to be building your own data center. Colos are great. They have 24-7 monitoring. They have cameras. They have biometrics. They are fantastic. They’re all carrier neutral at this stage. If you looked 10, 12 years ago, you might have been locked into a single service provider from a network perspective. All of them have multi-carrier at this stage. It’s a fantastic way.
Colos tend to have interesting pricing in terms of retail versus commercial versus high-end users, where you are actually having a colo built for you for your primary use. Most enterprises are going to be over retail, but way under commercial use. Pricing is different there than maybe other places. All the cost models I showed are very much in that over retail, if you’re under retail, the model does not hold. If you’re over retail size, they’re going to show pretty similar economics from a site perspective. Colo facility location buildout is usually 8 to 12 weeks. If you’re using a colo provider that doesn’t have a carrier you want to use, getting network connectivity to that site can be very time consuming. Outside of that, everything else is pretty easy to do.

Renato Losio. Article originally posted on InfoQ.

MariaDB has recently released MariaDB Community Server 11.8 as generally available, its yearly long-term support (LTS) release for 2025. The new release introduces integrated vector search capabilities for AI-driven and similarity search applications, enhanced JSON functionality, and temporal tables for data history and auditing.
The new Vector datatype allows for more complex data storage and retrieval, particularly useful for machine learning and data science applications where vector representations of data are common. While vector support was added in earlier releases, as previously covered on InfoQ, this is the first LTS release to allow developers to store embeddings and query them alongside traditional relational data. Kaj Arnö, CEO of the MariaDB Foundation, writes:
This is undoubtedly the most significant highlight of MariaDB 11.8 LTS: full support for MariaDB Vector(…) Vector search capabilities are crucial for RAG and other modern AI and machine learning applications, enabling similarity search on large datasets. MariaDB Vector is now fully supported in LTS form, giving you stability and predictability for years to come.
MariaDB Vector includes a native VECTOR data type with indexing for nearest-neighbor search, functions for calculating vector similarity (VEC_DISTANCE_EUCLIDEAN, VEC_DISTANCE_COSINE, and VEC_DISTANCE), and functions for converting binary vectors to their textual representation and back (VEC_FromText and VEC_ToText). Furthermore, the feature provides SIMD hardware optimizations for Intel (AVX2 and AVX512), ARM, and IBM Power10 CPUs.
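For a rough idea of how these functions are used from application code, here is a sketch using the Node.js mariadb connector. The table definition (VECTOR(4), inline VECTOR INDEX) and the exact function behavior are assumptions to verify against the MariaDB 11.8 documentation:

import mariadb from "mariadb";

// Sketch only: store a few embeddings and run a nearest-neighbour search.
async function vectorSearchDemo() {
  const conn = await mariadb.createConnection({
    host: "localhost",
    user: "app",
    password: "secret",
    database: "demo",
  });
  try {
    // VECTOR(4) and the VECTOR INDEX syntax below are assumptions based on the release notes.
    await conn.query(
      `CREATE TABLE IF NOT EXISTS items (
         id INT PRIMARY KEY,
         embedding VECTOR(4) NOT NULL,
         VECTOR INDEX (embedding)
       )`
    );
    await conn.query(
      "INSERT INTO items VALUES (?, VEC_FromText(?)), (?, VEC_FromText(?))",
      [1, "[0.1, 0.2, 0.3, 0.4]", 2, "[0.9, 0.8, 0.7, 0.6]"]
    );
    // Order rows by Euclidean distance to a query vector.
    const rows = await conn.query(
      `SELECT id, VEC_DISTANCE_EUCLIDEAN(embedding, VEC_FromText(?)) AS distance
         FROM items ORDER BY distance LIMIT 3`,
      ["[0.1, 0.2, 0.3, 0.4]"]
    );
    console.log(rows);
  } finally {
    await conn.end();
  }
}

vectorSearchDemo().catch(console.error);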
The new features allow similarity searches on high-dimensional data, targeting popular use cases like semantic search, recommendation engines, and anomaly detection. Earlier this year, database expert Mark Callaghan ran benchmarks to compare MariaDB, Qdrant and Postgres (pgvector) with a large dataset. He concluded:
If you already run MariaDB or Postgres then I suggest you also use them for vector indexes (…) I have a bias. I am skeptical that you should deploy a new DBMS to support but one datatype (vectors) unless either you have no other DBMS in production or your production DBMS does not support vector indexing.
In a deep review of the release, Federico Razzoli, founder of Vettabase, highlights some of his favorite improvements, including parallel dumps, PARSEC authentication, and new SQL syntaxes, as well as what was left out, such as catalogs. On vector search, he writes:
MariaDB vectors are faster than pgvector, according to Mark Callaghan’s benchmarks. But there are some caveats here. If we only care about performance, the biggest problem is that MariaDB apparently decided to never implement stored procedures in languages other than SQL. This means that the embedding process must happen outside of MariaDB, normally in another server, even if the original data is in MariaDB. With PostgreSQL, you can do everything in Postgres itself.
Vector search is the main feature of the MariaDB release but not the only one: like other open-source relational databases, MariaDB has now moved to Unicode as the default character set to make it fully compatible with today’s multilingual and global applications, and extended the TIMESTAMP range from 2038 to 2106. Arnö writes:
Like most open source projects, we have addressed the famous Year 2038 problem. But unlike many others, MariaDB achieves this without requiring any data conversion — provided you’re not using System-Versioned Tables. This means your existing data stays intact while you gain an 80-year reprieve on timestamp overflows.
The release includes improved support for temporal tables for data history and auditing: maintaining a complete history of modifications to the data helps with Point-in-Time Recovery scenarios, compliance, and security. Ralf Gebhardt, product manager at MariaDB plc, writes:
First introduced in MariaDB 10.3 and now available with several enhancements, Temporal Tables automatically manages the history of your data and simplifies the development and maintenance of applications that require data lineage.
According to the documentation, it is possible to upgrade to MariaDB 11.8 from MariaDB 11.4 (the previous LTS) or any older release, back to MariaDB Server 10.0 or earlier, including most versions of MySQL Server. MariaDB has published a separate article on how to build AI applications using frameworks with MariaDB Vector Store.
Major cloud providers do not yet support the latest GA release on their managed services, with AWS currently supporting 11.8 only in the database preview environment.
Released under the GPLv2 license, MariaDB 11.8 is available on GitHub.


EdTech unicorn Multiverse has appointed Donn D’Arcy as its new chief revenue officer (CRO).
London-based D’Arcy joins from MongoDB, where, as head of EMEA, he helped scale the business to $700m in ARR, representing over 30% of MongoDB’s global revenue.
His appointment as CRO is set to help scale Multiverse’s goal to build the AI adoption layer for the enterprise through transforming workforce skills.
The London-headquartered firm was founded by CEO Euan Blair, son of former Prime Minister Tony Blair, and has made a string of leadership appointments in recent months.
These include the hirings of MongoDB’s Jillian Gillespie as CFO and Baroness Martha Lane Fox to the board.
Several weeks later, it strengthened its senior team with the onboarding of Helen Greul as engineering vice president and Asha Haji as operations vice president.
The company’s ambition is to become a generational British tech success story by solving a critical problem – while companies are investing heavily in AI tools, they lack the workforce skills to unlock their true value.
It says that this hire, and those that preceded it, are key to making this mission possible.
Prior to his success at MongoDB, D’Arcy spent over twelve years at BMC Software, where he helped lead BMC UK to $500m in revenue, making it the top-performing region worldwide.
He will now be tasked with using his expertise as Multiverse builds upon its partnerships with leading global companies, which include over a quarter of the FTSE 100, as well as 100 NHS trusts and more than 55 local councils.
“Truly seizing the AI opportunity requires companies to build a bridge between tech and talent – both within Multiverse and for our customers,” said Blair.
“Bringing on a world-class leader like Donn, with his incredible track record at MongoDB, is a critical step in our goal to equip every business with the workforce of tomorrow.”
D’Arcy added: “Enterprise AI adoption won’t happen without fixing the skills gap. Multiverse is the critical partner for any company serious about making AI a reality, and its focus on developing people as the most crucial component of the tech stack is what really drew me to the organisation.
“The talent density, and the pathway to hyper growth, means the next chapter here is tremendously exciting.”
Article originally posted on mongodb google news. Visit mongodb google news
Microsoft Enhances Developer Experience with DocumentDB VS Code Extension and Local Emulator

MMS • Craig Risi
Article originally posted on InfoQ. Visit InfoQ

In a move to streamline developer workflows around MongoDB‑compatible databases, Microsoft has released an open‑source DocumentDB extension for Visual Studio Code alongside DocumentDB Local, a lightweight local emulator. Designed for use with Azure Cosmos DB’s MongoDB API and standard MongoDB instances, this toolset empowers developers to manage, query, and edit document databases directly within VS Code without relying on external tools or cloud resources.
Install the extension via the VS Code Marketplace to browse collections, inspect documents, and run find() queries using an intelligent editor with syntax highlighting and autocomplete. Data can be viewed in table, tree, or JSON formats, with seamless pagination for large datasets. Developers can import and export JSON datasets, facilitating efficient prototyping and testing.
DocumentDB Local complements the extension by providing a containerized MongoDB‑compatible engine perfect for integration testing and local development. It supports the MongoDB wire protocol and behaves consistently with Azure Cosmos DB, ensuring parity between local and production environments.
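Because the emulator speaks the MongoDB wire protocol, the standard MongoDB Node.js driver should work against it unchanged. The sketch below is illustrative only: the connection string, port, database, and collection names are assumptions, and you would adjust them to match however you start the container on your machine.

```typescript
import { MongoClient } from "mongodb";

// Assumption: the local container is reachable at this host/port. Depending on
// how it is configured, you may also need TLS options in the connection string.
const uri = "mongodb://localhost:10260/";

async function main() {
  const client = new MongoClient(uri);
  await client.connect();
  try {
    const todos = client.db("demo").collection("todos");

    // Seed a document, then run the same kind of find() query you would issue
    // from the extension's query editor.
    await todos.insertOne({ title: "try the emulator", done: false });
    const open = await todos.find({ done: false }).toArray();
    console.log(open);
  } finally {
    await client.close();
  }
}

main().catch(console.error);
```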
This unified toolkit eliminates workflow friction by enabling local-first development while maintaining compatibility with cloud databases. Developers can switch environments effortlessly, reduce context switching, and accelerate prototype iteration. The ability to test end‑to‑end – from local container to cloud deployment – without leaving the editor significantly boosts efficiency and productivity.
This functionality is not unique to VS Code, though. MongoDB support has matured for those using JetBrains IDEs like IntelliJ IDEA or DataGrip, including MongoDB Shell integration. Developers can view and edit documents, execute shell commands, and leverage database navigation and completion features directly in their IDE.
Additionally, third‑party tools such as DBCode also bring database management into VS Code, providing a unified interface for connecting to MongoDB, querying data, and handling schema, reflecting a growing trend toward embedding database workflows in code‑centric environments.
By combining a polished VS Code experience with a lightweight local database emulator, Microsoft is looking to deliver a powerful and flexible foundation for MongoDB developers: an environment that, they claim, can support fast prototyping, consistent testing, and efficient migration between local and cloud environments, all from a single interface.

MMS • Ben Linders
Article originally posted on InfoQ. Visit InfoQ

To grow their career, Bruno Rey suggests that software engineers should develop ambition, increase their capacity, and seek opportunities. He advises being proactive, broadening your influence by learning from peers, and stepping outside your comfort zone. Software engineers can keep a brag doc to ensure that their work is visible and plan their growth with realistic long-term goals.
Bruno Rey spoke about how software engineers can grow their careers at QCon San Francisco.
Rey mentioned three factors that drive the personal growth of software engineers: ambition, capacity, and opportunity. Ambition for him means understanding that making an extra effort to become a better version of ourselves will pay off. Capacity is the ability to perform the tasks that are expected of an employee one level above you, or at the very least, the ability to learn them quickly. But even a perfect employee can find it difficult to climb the ladder if they don’t find a good opportunity, Rey said.
As an individual, if you’re having trouble maintaining ambition consistently, the underlying factor might be a lack of motivation, and you should evaluate why you’re going through that, Rey suggested.
Employers and mentors should look for signs of ambition during recruiting. If you have someone working at the company who is a good worker but fails to show ambition, try to make an extra effort to explain the benefits and maybe make them see examples in real life, Rey said.
Rey suggested that software engineers have to take agency to grow personally:
Some people prefer the approach of victim-player, some call it “high-agency”, others use the term “proactive”; they’re all similar. This was made very popular with the famous “7 habits…” book by Covey, with “be proactive” being the very first habit.
We all have an area of influence, Rey said. There are things we can change and things we can’t. What happens in most cases is that engineers think that their area of influence is smaller than it actually is, and move around in the small subsection of their comfort zone:
If you’re willing to take a few uncomfortable steps, you can probably start broadening your influence by a lot. You may take a few false steps as part of this process, but if done in good spirit and with judgment, any healthy work environment should forgive those.
To broaden their influence, software engineers can talk to their manager or to a superior peer, Rey suggested. See what they do and how they operate. Try to take some tasks off their plate and do them yourself. If you don’t know how to do it, train that muscle and learn, he mentioned.
Saying “teach me how to do this” sometimes sounds lazy, so it’s better to learn as much as you can on your own and then come to them with specific questions or for validation that your understanding is correct, Rey said. Make sure you don’t step on their toes and don’t make your work public before getting validation from them.
To plan their career growth at a sustainable pace, software engineers should develop a long term vision:
Where do you see yourself in 3 years? And in 5 years? Make sure it’s achievable. Then trace your way back and propose intermediate goals: what do you need at the end of this year in order to achieve that 5y goal? Again, make it achievable. Discuss them with your superiors.
When planning your career, understand that things won’t always go smoothly; there will be setbacks and delays outside of your control. Just like project planning, make room to accommodate for that, Rey concluded.
InfoQ interviewed Bruno Rey about how software engineers can broaden their influence and ensure that their work is recognized.
InfoQ: How can software engineers broaden their influence?
Bruno Rey: Opportunities to broaden your influence are easier to come by in smaller companies or startups where responsibilities aren’t so segmented, and sometimes everybody does everything.
Back in 2013, I was working as a developer in one such company and would normally step out of my role and do tasks that were more associated with Ops: parse logs, restart servers/processes, or gather information about bugs. I was not afraid to just do what needed to be done, even if formally it was someone else’s task. My superiors saw this as a great trait; luckily this type of behavior was encouraged at that company.
InfoQ: What can software engineers do to ensure that their work is recognized?
Rey: Nobody gets a promotion by doing a lot of invisible work. If you want to show ambition you have to be sure that you’re vocal about the work you’re doing, even if that isn’t something that comes to you naturally.
In my case I tend to favor long tenure in jobs, and try to stay in the same company for many years to have a relevant impact. Over the years I found that a brag doc was completely necessary. Especially if you stay in the same place longer than your manager, as this helps a new manager gain context quickly on how you work and in which areas you shine.
A couple of good articles on this are Get your work recognized: write a brag document by Julia Evans and Publishing your work increases your luck by Aaron Francis.
LTIMindtree Launches ‘BlueVerse’ — An AI Ecosystem that will Define the Enterprise of the Future

MMS • RSS
Posted on mongodb google news. Visit mongodb google news

LTIMindtree [NSE: LTIM, BSE: 540005], a global technology consulting and digital solutions company, has announced the launch of a new business unit and suite of AI services and solutions: BlueVerse. Designed as a complete AI ecosystem, it helps enterprises accelerate their AI concept-to-value journey. This ecosystem is a universe of components that enterprises need to elevate business operations, achieve breakthrough productivity, and create transformational customer experiences.
BlueVerse Marketplace currently has over 300 industry and function-specific agents and ensures seamless interoperability and a growing connector ecosystem. It is underpinned by responsible AI governance, delivering enterprise-grade trust and scalability.
BlueVerse Productized Services utilize repeatable frameworks, accelerators, and industry-specific solution kits. At launch, BlueVerse will offer pre-built solutions for Marketing Services and Contact Center as a Service (CCaaS). With Marketing Services, businesses can unlock unparalleled campaign effectiveness and achieve maximum ROI, transforming their marketing strategies into powerful growth engines. CCaaS uses context-aware AI agents to reduce response times, leading to enhanced customer satisfaction.
This ecosystem also includes BlueVerse Foundry, an intuitive no-code designer and flexible pro-code editor that can enable enterprises to quickly compose and deploy AI agents, AI Tools, assistants, Retrieval-Augmented Generation (RAG) pipelines and intelligent business processes.
Venu Lambu, Chief Executive Officer and Managing Director, LTIMindtree, said: “BlueVerse is all about unlocking productivity for businesses at different levels by embedding AI across all functions of the enterprise. Backed by a strategic partnership ecosystem and deep AI expertise, it positions LTIMindtree as the partner of choice for future-ready organizations.”
“BlueVerse will enable our clients to unlock new sources of value, streamline operations, and stay ahead in an AI-driven world,” said Nachiket Deshpande, President, Global AI Services, Strategic Deals and Partnerships. “By embedding advanced AI across core business functions, we aim to deliver measurable outcomes and create long-term competitive advantage for our clients.”
BlueVerse is where autonomous agents and enterprise ambition converge. At LTIMindtree, we’re not just bringing AI to business—we’re making business Agentic.
About LTIMindtree:
LTIMindtree is a global technology consulting and digital solutions company that enables enterprises across industries to reimagine business models, accelerate innovation, and maximize growth by harnessing digital technologies. As a digital transformation partner to more than 700 clients, LTIMindtree brings extensive domain and technology expertise to help drive superior competitive differentiation, customer experiences, and business outcomes in a converging world. Powered by 84,000+ talented and entrepreneurial professionals across more than 40 countries, LTIMindtree — a Larsen & Toubro Group company — solves the most complex business challenges and delivers transformation at scale. For more information, please visit www.ltimindtree.com.
Article originally posted on mongodb google news. Visit mongodb google news

MMS • RSS
Posted on mongodb google news. Visit mongodb google news
If you’ve spent more than five minutes in web development these past few years, you’ve likely heard of the MERN stack. And if not, don’t worry, let us bring you up to speed: MERN refers to MongoDB, Express.js, React, and Node.js. It’s an all-JavaScript stack that many developers are big fans of, particularly when building speedy, cutting-edge, and responsive web apps.
But what is all the buzz about? More to the point, what are individuals actually creating with the MERN stack in the real world?
Let’s dissect five truly practical and prevalent use cases where the MERN stack excels—not just in theory, but in real products and platforms that people interact with every day and that full-stack development services are most often hired to build.
1. Dashboards & Single Page Applications (SPAs)
Let’s begin with something that nearly every business requires: dashboards. Be it a marketing analytics dashboard, a sales CRM, or a task management application like Trello, these interfaces are ideal for the MERN stack.
With React, you get a smooth, responsive UI that refreshes in real time without repeatedly reloading the page. Express and Node drive the backend API, and MongoDB manages the data—user accounts, saved reports, activity logs, and so forth.
Why MERN works so well here:
- React makes the frontend super interactive and fast.
- MongoDB’s flexible schema is great for storing dynamic dashboard widgets or user settings.
- You can integrate real-time features like notifications or live data updates using WebSockets.
This stack is best suited for companies building tools that people use every day and expect to be quick, seamless, and highly responsive.
Several companies employ dedicated full-stack developers to build these in-house tools from the ground up with MERN, giving them complete control over customization and performance.
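To make the dashboard scenario concrete, here is a minimal sketch of the backend half: an Express API backed by Mongoose, where each user’s widget layout is stored as a flexible document. Route paths, model names, and the connection string are illustrative assumptions, not taken from any particular product.

```typescript
import express from "express";
import mongoose, { Schema } from "mongoose";

// Widget layout is stored as a flexible subdocument, so each user can save
// whatever dashboard configuration they like without schema migrations.
const DashboardSchema = new Schema({
  userId: { type: String, required: true, index: true },
  widgets: [{ type: Schema.Types.Mixed }],
  updatedAt: { type: Date, default: Date.now },
});
const Dashboard = mongoose.model("Dashboard", DashboardSchema);

const app = express();
app.use(express.json());

// Fetch the saved dashboard for a user.
app.get("/api/dashboards/:userId", async (req, res) => {
  const dashboard = await Dashboard.findOne({ userId: req.params.userId });
  res.json(dashboard ?? { userId: req.params.userId, widgets: [] });
});

// Save (upsert) the user's widget layout.
app.put("/api/dashboards/:userId", async (req, res) => {
  const dashboard = await Dashboard.findOneAndUpdate(
    { userId: req.params.userId },
    { widgets: req.body.widgets, updatedAt: new Date() },
    { new: true, upsert: true }
  );
  res.json(dashboard);
});

async function start() {
  await mongoose.connect("mongodb://localhost:27017/dashboards");
  app.listen(3000, () => console.log("Dashboard API listening on :3000"));
}

start().catch(console.error);
```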
2. eCommerce Websites (Without the Bloat)
If you’ve ever used Shopify or Magento, you know they can be a bit heavyweight. That’s fine for large-scale retail, but what do you reach for when you want something leaner or highly customized?
This is where the MERN stack is a breath of fresh air.
You can create anything from a tiny online store to a marketplace website using MERN. React provides you with product pages, filters, and a responsive checkout UI.
Express and Node take care of user authentication, payment gateway, and tracking orders. MongoDB stores all your product data, user data, reviews—you name it.
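As a rough illustration of that data layer, the Mongoose models below sketch how flexible product attributes and denormalized order line items might be shaped. The field names are assumptions chosen for the example, not a prescribed schema.

```typescript
import mongoose, { Schema } from "mongoose";

// Products can carry category-specific attributes (size, colour, bundle
// contents, ...) without forcing every product into the same shape.
const ProductSchema = new Schema({
  name: { type: String, required: true },
  price: { type: Number, required: true },
  attributes: { type: Schema.Types.Mixed, default: {} },
  reviews: [{ user: String, rating: Number, comment: String }],
});

// Orders reference products but keep a denormalized price snapshot, so
// historical orders stay correct even if prices change later.
const OrderSchema = new Schema({
  userId: { type: String, required: true },
  items: [{ productId: Schema.Types.ObjectId, quantity: Number, unitPrice: Number }],
  status: { type: String, enum: ["pending", "paid", "shipped"], default: "pending" },
  createdAt: { type: Date, default: Date.now },
});

export const Product = mongoose.model("Product", ProductSchema);
export const Order = mongoose.model("Order", OrderSchema);
```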
What makes it better than using a template-based CMS?
- Total control over the user experience.
- Easier to integrate custom features (e.g., flash sales, product bundles, loyalty points).
- Scales with your traffic without slowing down.
In short: if off-the-shelf eCommerce platforms feel too rigid or bloated, MERN lets you build a store exactly how you want it. It’s also a great fit for businesses looking for long-term, scalable full-stack development services.
3. Social Platforms & Community Sites
No, we are not saying you’ll create the next Facebook overnight—but social platforms don’t need to be gigantic. Think smaller: writing communities, developer groups, or hobby interest clubs.
The MERN stack is great for user applications with posts, comments, likes, and messaging. React enables you to build silky-smooth notification UIs and newsfeeds.
Node and Express handle the back-end logic: who is friends with whom, who follows whom, and so on. MongoDB keeps all your unstructured content: posts, images, chat logs, and user bios.
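A minimal sketch of that document model, using the plain MongoDB Node.js driver: a post keeps its likes and comment thread embedded in one document, and `$push` appends new comments in place. The database, collection, and field names are purely illustrative.

```typescript
import { MongoClient } from "mongodb";

async function demo() {
  const client = new MongoClient("mongodb://localhost:27017");
  await client.connect();
  try {
    const posts = client.db("community").collection("posts");

    // A post keeps its likes and comment thread embedded in one document:
    // a single read fetches the whole thread, and replies nest as subdocuments.
    await posts.insertOne({
      author: "jane",
      body: "Shipping our first MERN app today!",
      likes: ["sam", "lee"],
      comments: [
        { author: "sam", body: "Congrats!", replies: [{ author: "jane", body: "Thanks!" }] },
      ],
      createdAt: new Date(),
    });

    // Append a new comment in place without rewriting the rest of the document.
    await posts.updateOne(
      { author: "jane" },
      { $push: { comments: { author: "lee", body: "Nice work", replies: [] } } }
    );
  } finally {
    await client.close();
  }
}

demo().catch(console.error);
```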
What developers like about MERN here:
- Easy to set up real-time messaging or notifications.
- MongoDB’s document structure is ideal for social features (comments, replies, threads).
- You can add gamification, badges, or user levels easily.
If you’re building a community-first platform, MERN gives you the flexibility to grow it without outgrowing your stack.
4. Learning Management Systems (LMS)
E-learning has expanded enormously, and not just because of the pandemic. People are always looking to learn at their own speed, and businesses are shelling out big money to reskill employees.
That is where Learning Management Systems come into play, and MERN stack web development is a good fit here as well.
You can create interactive course pages, dynamic video players, quizzes, and rich instructor and learner dashboards using React.
Node and Express manage back-end logic concerning course availability, progress, and submission handling. MongoDB stores structured and semi-structured data of any type, such as course content, progress reports, ratings, and assignments.
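One way to sketch the multi-role part in Express is a small role-checking middleware. This assumes some earlier authentication step (for example, a JWT check) has already resolved the caller’s role; the route paths and role names are invented for the example.

```typescript
import express, { Request, Response, NextFunction } from "express";

type Role = "student" | "teacher" | "admin";

// Assumes a prior auth middleware has put the caller's role on res.locals;
// this middleware only gates the route by role.
function requireRole(...allowed: Role[]) {
  return (req: Request, res: Response, next: NextFunction) => {
    const role = res.locals.role as Role | undefined;
    if (role && allowed.includes(role)) return next();
    res.status(403).json({ error: "forbidden" });
  };
}

const app = express();
app.use(express.json());

// Students record their own progress; teachers and admins can grade.
app.post("/api/courses/:id/progress", requireRole("student"), (req, res) => {
  // ...persist { courseId: req.params.id, lesson: req.body.lesson } to MongoDB
  res.status(204).end();
});

app.post("/api/courses/:id/grades", requireRole("teacher", "admin"), (req, res) => {
  // ...persist the grade to MongoDB
  res.status(204).end();
});

app.listen(3000, () => console.log("LMS API listening on :3000"));
```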
Real advantages:
- Easily supports multiple user roles (student, teacher, admin).
- Integrates with video APIs (Zoom, Vimeo, etc.) and payment gateways.
- You can build discussion forums, certifications, and gamified elements without fighting a rigid system.
Bottom line: MERN helps you build an LMS that’s engaging, adaptable, and easy to maintain.
5. Real-Time Collaboration Tools
You’ve likely used Google Docs, Figma, or even Miro at some point. They’re sophisticated tools—but distilled to their essence, they’re about real-time collaboration.
That can be handled by the MERN stack too.
Assume that you are building a collaborative whiteboard application or a code-sharing tool for developers. With React, the user interface refreshes in real time for all participants.
Node.js with WebSockets manages the real-time data exchange between participants, Express handles the routing, and MongoDB stores the session history and version details.
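A minimal sketch of that real-time loop using Socket.IO and the MongoDB driver: strokes are broadcast to everyone in a board’s room and also persisted for later replay. Event names, the port, and the connection string are assumptions made for illustration.

```typescript
import { createServer } from "http";
import { Server } from "socket.io";
import { MongoClient } from "mongodb";

async function main() {
  const mongo = new MongoClient("mongodb://localhost:27017");
  await mongo.connect();
  const strokes = mongo.db("whiteboard").collection("strokes");

  const httpServer = createServer();
  const io = new Server(httpServer, { cors: { origin: "*" } });

  io.on("connection", (socket) => {
    // Clients join a board-specific room so updates only reach collaborators
    // working on the same board.
    socket.on("join", (boardId: string) => socket.join(boardId));

    socket.on("draw", async (boardId: string, stroke: unknown) => {
      // Fan the stroke out to everyone else immediately...
      socket.to(boardId).emit("draw", stroke);
      // ...and persist it so the board can be replayed or versioned later.
      await strokes.insertOne({ boardId, stroke, at: new Date() });
    });
  });

  httpServer.listen(4000, () => console.log("Collab server listening on :4000"));
}

main().catch(console.error);
```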
Why it works:
- Node.js is great for real-time communication with many users.
- You can store and retrieve different document versions easily with MongoDB.
- React’s component model makes the UI modular and easy to update on the fly.
These kinds of apps need speed, responsiveness, and real-time capability—MERN does all three.
Wrapping Up
The MERN stack is not a silver bullet, but it certainly hits the sweet spot for a variety of web applications. From productivity applications and marketplaces to learning software and collaboration tools, it provides a rock-solid yet remarkably flexible foundation.
If you’re going to be mapping out a project and wondering whether MERN is appropriate, think about what you’re building. Do you need something interactive? Scalable? Customizable? If the answer is yes, MERN should be a top contender.
And even if you don’t have the expertise in-house, don’t worry: hire dedicated full-stack developers who are proficient in MERN. A professional team will not only build your application, but also help you refine it into something users genuinely want to use.
Author Bio
Tina Jain
Tina Jain is a tech content writer with over 3 years of experience creating clear, engaging, and SEO-friendly content. She specializes in simplifying complex technical topics for blogs, whitepapers, and product documentation, helping tech brands connect with their audiences effectively.
Article originally posted on mongodb google news. Visit mongodb google news

MMS • Lindsey Tibbitts
Article originally posted on InfoQ. Visit InfoQ

Key Takeaways
- Architectural success in decentralized systems depends more on how decisions are made than on system design alone.
- Replacing control with trust requires visible, structured practices – such as ADRs and advice forums – to build confidence and clarity.
- Empowering teams to make architectural decisions works when they seek advice from both experts and those impacted, not permission from higher-ups.
- Lightweight governance tools like Architecture Advice Forums can improve alignment without reintroducing hierarchy.
- Decentralization works best when technical and cultural practices evolve together – supporting autonomy without sacrificing cohesion.
Introduction: Beyond the Illusion of Autonomy
Decentralized architecture is often celebrated as a technical design choice – service boundaries, team APIs, infrastructure independence. But autonomy on paper doesn’t guarantee alignment in practice.
When architecture becomes distributed, the challenge isn’t just how the system is designed – it’s how decisions get made, shared, and trusted across teams.
In my organization, that reality became clear as we grew rapidly and integrated multiple newly acquired companies.
Teams were empowered in theory, but still struggled in practice. Architects became bottlenecks. Developers either waited for permission or made decisions in isolation. Autonomy existed, but confidence didn’t.
Reading Facilitating Software Architecture by Andrew Harmel-Law gave me language – and a path – for addressing that gap.
The book offers lightweight, trust-based practices like the Architecture Advice Process, Architectural Decision Records (ADRs), and Advice Forums that help organizations build technical alignment without falling back on centralized control.
This article reflects my personal interpretation of Facilitating Software Architecture by Andrew Harmel-Law (2023), as applied in a real-world, post-acquisition engineering context.
This article shares how we’ve started applying those ideas inside a real, multi-team engineering environment. It’s not a success story – it’s a reflection on what happens when you try to shift from control to trust, from approval to advice, from isolation to visibility.
What follows is a set of lessons, tools, and cultural shifts that have helped us evolve toward a more resilient, decentralized architecture – one where autonomy is earned through shared understanding, not just granted by org charts.
The Real Problem: Decision-Making at Scale
Decentralized architecture isn’t just a matter of system design – it’s a question of how decisions get made, by whom, and under what conditions. In theory, decentralization empowers teams. In practice, it often exposes a hidden weakness: decision-making doesn’t scale easily.
We started to feel the cracks as our teams expanded quickly and our organizational landscape became more complex. As teams multiplied, architectural alignment started to suffer – not because people didn’t care, but because they didn’t know how or when to engage in architectural decision-making. Architects became bottlenecks, and developers either waited for direction or made isolated decisions that introduced long-term friction.
This problem starts earlier than many realize. A simple model we found helpful breaks every architectural decision into three stages:
- A decision is required – recognizing that a choice needs to be made
- The decision happens – evaluating options and selecting one
- The decision is implemented – putting the decision into practice
That first stage – recognizing the decision point – is frequently missed. Too often, only one “obvious” path is presented, and the opportunity to generate or compare alternatives is skipped entirely. This makes architecture feel opaque and disempowering, especially when options are decided behind closed doors.
Even when teams do recognize a decision point, scaling the decision process across groups adds complexity. There’s no one-size-fits-all approach. What we realized is that decentralization isn’t the goal – it’s the constraint. The real objective is to make good decisions quickly, with the right people involved. That means being intentional about:
- Who can initiate, make, and implement decisions
- How we surface and compare options
- When to involve input – and from whom
Without that structure, autonomy becomes a liability.
This insight became the turning point for us. Recognizing that our decision-making model – not our tech stack – was limiting our velocity led us to adopt new practices that supported distributed decision-making without losing coherence. The first step was replacing permission-seeking with advice-seeking, which I’ll explore next.
Trust Over Control: Redefining Governance
Most engineering organizations talk about empowerment, but their architecture processes still rely on implicit control – who’s allowed to decide, who needs to sign off, and how tightly everything is tracked. For decentralization to work, that model has to change.
One model that helped clarify this shift was the Architecture Advice Process: a decision-making model grounded not in authority, but in trust.
The process follows a single rule:
Anyone can make and take an architectural decision, as long as they:
- Seek advice from everyone meaningfully affected by the decision
- Seek advice from experts with relevant experience
This isn’t about asking for permission. It’s about seeking knowledge, exposing context, and making informed, accountable decisions – fast.
And that shift matters. We stopped asking ‘Who approves this?’ and started asking ‘Who should I talk to before we do this?’
That reframing – away from permission and toward advice – became a cultural unlock.
We realized what we really needed wasn’t more approval – it was more clarity, more consistency, and more collaboration.
This model doesn’t mean anything goes. It relies on professionalism and mutual accountability, not hierarchy. When decision-makers seek advice visibly – and advice-givers respond with thoughtful, experience-based input – teams build trust.
By emphasizing transparency over control, we created space for better conversations, clearer ownership, and faster progress.
Putting It Into Practice: The Architecture Advice Process
The shift from control to trust requires more than mindset – it needs practice. We leaned into a lightweight but powerful toolset to make decentralized decision-making work in real teams. Chief among them is the Architectural Decision Record (ADR).
ADRs are often misunderstood as documentation artifacts. But in practice, they are confidence-building tools. They bring visibility to architectural thinking, reinforce accountability, and help teams make informed, trusted decisions – without relying on central authority.
Why ADRs Matter
Decentralized teams often face three major confidence gaps:
- Confidence in ourselves and others to make sound architectural calls
- Confidence in the advice process – that feedback was sought and considered
- Confidence in understanding decisions over time, especially when context fades
A well-crafted ADR helps close all three gaps. It gives future readers a clear view into why a decision was made, what options were considered, and who was consulted – which is critical when decisions are distributed and evolve across many teams.
What an ADR Is
An ADR captures a single architectural decision and its surrounding context. Once finalized, it’s immutable (though it can be superseded). It’s written for future teammates, not just the immediate stakeholders. It should be easy to skim for relevance or dig into for detail.
Key components include:
- Title & ID – Clear and uniquely identifiable
- Status & Date – Indicates when the decision was made
- Context – Explains the problem or trigger
- Options – Lists 3–5 real, thoughtful alternatives – not just strawmen
- The decision outcome – Names the option chosen and why
- Advice Received – Captures raw advice from those consulted
Don’t exclude rejected options – “Do nothing” counts too – rejection is still a decision
This structure isn’t just for record-keeping. It reinforces the Architecture Advice Process by tying decision-making directly to the advice that informed it.
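The article describes ADRs as markdown files stored close to the work; the TypeScript type below is purely illustrative, mirroring the components listed above for anyone who wants to index ADRs in tooling. The field names, status values, and ID format are assumptions, not part of the practice itself.

```typescript
// Illustrative only: mirrors the ADR components described above as a typed
// record, e.g. for indexing markdown ADRs in internal tooling.
type AdrStatus = "proposed" | "accepted" | "superseded";

interface Advice {
  from: string;    // person or team consulted
  summary: string; // the advice as given, captured raw rather than paraphrased away
}

interface ArchitectureDecisionRecord {
  id: string;                // e.g. "ADR-0042" (ID format is an assumption)
  title: string;
  status: AdrStatus;
  date: string;              // ISO date the decision was taken
  context: string;           // the problem or trigger
  options: string[];         // 3-5 real alternatives, including "do nothing"
  decision: string;          // the chosen option and why
  adviceReceived: Advice[];  // raw advice from those consulted
  supersededBy?: string;     // set when a later ADR replaces this one
}
```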
Making ADRs Work in Practice
In my own organization, architectural decisions often lived in Slack threads or hallway conversations – easy to lose, hard to revisit. We were informally applying parts of the advice process, but without structure or visibility. Introducing even a lightweight, markdown-based ADR process brought immediate clarity.
Some practices that helped us:
- Start with the problem. Brain-dump the context before listing solutions.
- Seek advice early. Especially on the framing and scope – not just the options.
- Make it visible. Share who was consulted and what was learned.
- Capture provisional decisions. These invite feedback without stalling progress.
Storing ADRs close to the work – in GitHub repos, internal wikis, or project docs – makes them accessible. We treat them as a living archive of architectural thinking, not just decisions.
More Than Documentation
What stood out most to me is that ADRs aren’t just about record-keeping – they’re about trust. They help bridge the gap between those who design systems and those who live with the consequences of decisions. They give new teammates a path into prior decisions, and help avoid repeating the same conversations.
Most importantly, ADRs reflect the culture shift behind advice-driven architecture. They show teams that decisions aren’t made in isolation or by decree, but through transparent, inclusive, and intentional processes.
Creating Shared Context with Advice Forums
Decentralized architecture works best when decisions don’t happen in isolation. Even with good individual practices – like ADRs and advice-seeking – teams still need shared spaces to build trust and context across the organization. That’s where Architecture Advice Forums come in.
We implemented a practice called the advice forum, which offers a lightweight structure to support decentralized decision-making without reintroducing control. Advice forums aren’t governance checkpoints; they’re opportunities to learn, align, and advise – openly and regularly.
What an Advice Forum Is (and Isn’t)
An advice forum is:
- A recurring session for discussing in-progress architectural decisions
- A forum for advice, not approvals or consensus
- A place where teams present, experts advise, and observers learn
It’s not about gatekeeping or slowing teams down. Instead, it provides transparency and distributed visibility into architectural thinking – especially helpful in organizations with multiple teams or domains.
The Forum’s Core Purposes
Advice forums serve three main goals:
- Increase visibility into decisions already in flight
- Build trust through transparent, real-time discussion
- Create learning moments for participants and observers alike
When these conversations happen in the open, they promote psychological safety, sharpen thinking, and reduce the risk of duplicated effort. It also reinforces accountability – teams that know they’ll be sharing their work tend to think more intentionally about how they frame and justify decisions.
How It Works
Forums are structured but informal:
- Agendas are shared in advance with links to draft ADRs
- Teams briefly present the decision, context, and areas where they’re seeking input
- Advice is offered in the moment or afterward – always recorded in the ADR
- Silence is treated as intentional non-participation, not passive approval
Crucially, no decisions are made in the forum itself. The responsibility to decide remains with the team closest to the work. But the forum amplifies support, challenges blind spots, and exposes patterns worth exploring more broadly.
Why This Works
What makes this practice effective isn’t the meeting – it’s the mindset:
- It normalizes open architectural thinking, not behind-the-scenes approvals
- It enables cross-team alignment without enforcing sameness
- It fosters a community of curiosity, not control
Concepts like coalescent argumentation – where groups acknowledge shared understanding before exploring disagreements – help keep conversations productive. Teams learn not just what others are deciding, but how they’re thinking, what trade-offs they considered, and why they landed where they did.
In my own reflections, I found this practice particularly compelling. In many organizations, great technical work happens in silos. Advice forums help break those silos without imposing heavyweight processes. They create a visible on-ramp for developers and teams who want to engage with architecture but don’t yet feel empowered to do so.
Tips for Implementation
To get started, you don’t need buy-in from the whole company. Start small:
- Create a shared ADR template and repository
- Define the forum’s scope and expectations clearly – e.g., a one-page Terms of Reference
- Emphasize learning and delivery over ceremony
- Keep it optional but valuable
When done well, advice forums become a hub of architectural awareness. They help organizations evolve from disconnected teams making isolated decisions to a culture where architecture is open, shared, and continuously improved.
From Approval to Alignment: Cultural Shifts Observed
The shift to advice-seeking changed more than our process – it changed how people behaved.
Before adopting the Architecture Advice Process, our architecture function had become a bottleneck. Teams waited. Architects felt overwhelmed. Decisions often landed in our laps too late or with too little context. Everyone waited for someone else to make a decision. The more the architects centralized, the less connected they became.
We were holding too much, and holding it too tightly. And it showed: teams felt stuck, and architects felt responsible for decisions they weren’t close enough to make well.
We replaced approval bottlenecks with a system of shared responsibility. Teams began to proactively explain their decisions, involve the right people, and build more confidence in the process. Not everything shifted overnight, but over time, these patterns emerged:
- More developers wrote ADRs, even for small or internal decisions
- Architects stopped defaulting to ownership and started focusing on support
- Conversations got more thoughtful and less political
This wasn’t just a process change. It was a behavioral reset – a shift from permission to presence, from control to coaching. And it created space for a more inclusive, transparent, and resilient architecture practice.
Conclusion: A Cohesive Culture of Autonomy
Decentralized architecture only works when it’s grounded in intentional practices. Tools like the Architecture Advice Process, ADRs, and advice forums don’t just help us move faster – they help us move together.
One of the clearest lessons from both my experience and Harmel-Law’s book is this:
“Distributed architecture is more social than technical.”
It’s not enough to distribute system responsibilities – you also have to distribute trust. That means replacing command-and-control with guidance and transparency. It means shifting the architect’s role from approval authority to trusted peer and visible collaborator.
“Architects don’t need more power. They need more presence.”
If you’re working in a growing or distributed organization, consider piloting one of these practices. Start small. Try a lightweight ADR template. Launch a casual advice forum. Shift one decision from approval to advice.
Decentralization isn’t just an architectural pattern – it’s a cultural commitment. And it starts with how we choose to decide.