How Data Contracts Support Collaboration between Data Teams

MMS Founder
MMS Ben Linders

Article originally posted on InfoQ. Visit InfoQ

Data contracts define the interface between data providers and consumers, specifying things like data models, quality guarantees, and ownership. According to Jochen Christ, they are essential for distributed data ownership in data mesh, ensuring data is discoverable, interoperable, and governed. Data contracts improve communication between teams and enhance the reliability and quality of data products.

Jochen Christ spoke about data contracts at the OOP conference.

Data contracts are to data what APIs are to software systems, Christ said. They are an interface specification between a data provider and their data consumers. Data contracts specify the provided data model with its syntax, format, and semantics, but also contain data quality guarantees, service-level objectives, and terms and conditions for using the data, Christ mentioned. They also define the owner of the provided data product, who is responsible if there are any questions or issues, he added.

Data mesh is an important driver for data contracts, as data mesh introduces distributed ownership of data products, Christ said. Before that, we usually had just one central team that was responsible for all data and BI activities, with no need to specify interfaces with other teams.

With a data mesh, we have multiple teams that exchange their data products over a shared infrastructure. This shift requires clear, standardized interfaces between teams to ensure data is discoverable, interoperable, and governed effectively, Christ explained:

Data contracts provide a way to formalize these interfaces, enabling teams to independently develop, maintain, and consume data products while adhering to platform-wide standards.

Christ mentioned that the main challenge teams face when exchanging data sets is to understand domain semantics. He gave some examples:

If there is a field called “order_timestamp”, is it the timestamp when the customer clicked on “buy now”, is it the payment succeeded event, or is it the order confirmation email?

Another example is enumerations, such as a “status” field, which highly depends on the implemented business process and exception-handling routines.

Data contracts are written in YAML, so they are machine-readable, Christ said. Tools like Data Contract CLI can extract syntax, format, and quality checks from the data contract, connect to the data product, and test that the data product complies with the data contract specification. When these checks are included in a CI/CD deployment pipeline or data pipeline, data engineers can ensure that their data products are valid, Christ mentioned.
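
As a minimal sketch of what that automation could look like, assuming a GitHub Actions pipeline and the open-source Data Contract CLI (the file name and job layout are illustrative; connection credentials for the data product would be supplied separately, for example via environment variables):

# Hypothetical CI job: fail the build if the data product violates its contract
jobs:
  contract-test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: pip install datacontract-cli
      # Runs the schema, format, and quality checks defined in the contract
      # against the connected data product
      - run: datacontract test datacontract.yaml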

Data users can rely on data contracts when consuming data from other teams, especially when data contracts are automatically tested and enforced, Christ said. This is a significant improvement compared to earlier practices, where data engineers had to manually trace the entire lineage of a field using lineage attributes to determine whether it was appropriate and trustworthy for their use case, he explained:

By formalizing and automating these guarantees, data contracts make data consumption more efficient and reliable.

Data providers benefit by gaining visibility into which consumers are accessing their data. Permissions can be automated accordingly, and when changes need to be implemented in a data product, a new version of the data contract can be introduced and communicated with the consumers, Christ said.

With data contracts, we have very high-quality metadata, Christ said. This metadata can be further leveraged to optimize governance processes or build an enterprise data marketplace, enabling better discoverability, transparency, and automated access management across the organization to make data available for more teams.

Data contracts are transforming the way data teams collaborate, Christ explained:

For example, we can use data contracts as a tool for requirements engineering. A data consumer team can propose a draft data contract specifying the information they need for a particular use case. This draft serves as a basis for discussions with the data providers about whether the information is available in the required semantics or what alternatives might be feasible.

Christ called this contract-first development. In this way, data contracts foster better communication between teams, he concluded.
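
As an illustration of what such a consumer-proposed draft could look like (this example is not from the talk; the team names and field are hypothetical), the consumer spells out the semantics they need and leaves them for the provider to confirm:

dataContractSpecification: 1.1.0
info:
  title: Orders for Churn Analysis (draft)
  owner: Analytics Team        # consumer proposing the contract
models:
  orders:
    type: table
    fields:
      order_timestamp:
        type: timestamp
        description: |
          Moment the customer clicked "buy now".
          To be confirmed by the providing team: or is this the
          payment-succeeded event, or the order confirmation?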

InfoQ interviewed Jochen Christ about data contracts.

InfoQ: What do data contracts look like?

Jochen Christ: Data contracts are usually expressed as YAML documents, similar to OpenAPI specifications.

dataContractSpecification: 1.1.0
info:
  title: Orders Latest
  owner: Checkout Team
terms:
  usage: Data can be used for AI use cases.
models:
  orders:
    type: table
    description: All webshop orders since 2020
    fields:
      order_id:
        type: text
        format: uuid
      order_total:
        description: Total amount in cents.
        type: long
        required: true
        examples:
          - 9999

InfoQ: How do data contracts support exchanging data sets between teams?

Christ: With data contracts, we have a technology-neutral way to express the semantics, and we can define data quality checks in the contract to test these guarantees and expectations.

Here is a quick example:

order_total:
  description: |
    Total amount in the smallest monetary unit (e.g., cents).
    The amount includes all discounts and shipping costs.
    The amount can be zero, but never negative.
  type: long
  required: true
  minimum: 0
  examples:
    - 9999
  classification: restricted
  quality:
    - type: sql
      description: 95% of all values are expected to be between 10 and 499 EUR.
      query: |
        SELECT quantile_cont(order_total, 0.95) AS percentile_95
        FROM orders
      mustBeBetween: [1000, 49900]

This is the metadata specification of a field “order_total”, which not only defines the technical type (long), but also the business semantics that help to understand the values; e.g., it is important to understand that the amount is not in EUR, but in cents. There is a security classification defined (“restricted”), and the quality attribute defines business expectations that we can use to validate whether a dataset is valid or probably corrupt.

InfoQ: How can we use data contracts to generate code and automated tests?

Christ: In the previous “order_total” example, the data quality SQL query can be used by data quality tools (such as the Data Contract CLI) to execute data quality checks in deployment pipelines.

In the same way, the CLI can generate code, such as SQL DDL statements, language-specific data models, or HTML exports from the data model in the data contract.
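
For instance (a sketch; the export formats follow the Data Contract CLI documentation, but treat the exact flags as illustrative), additional pipeline steps could generate artifacts from the same contract:

steps:
  # Generate SQL DDL for the target database from the contract's data model
  - run: datacontract export --format sql datacontract.yaml > orders.sql
  # Generate a human-readable HTML export of the data model
  - run: datacontract export --format html datacontract.yaml > orders.html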



Aerospike Delivers Full Support for ACID Transactions – Datanami

MMS Founder
MMS RSS

Posted on nosqlgooglealerts. Visit nosqlgooglealerts


Aerospike has always been a fast database, capable of reading and writing huge amounts of data with very tight latencies. With today’s launch of Aerospike version 8, the NoSQL database company has completed its journey to handle the flip side of the enterprise coin: Ensuring full transactional consistency.

Aerospike’s journey to delivering full ACID (atomicity, consistency, isolation, durability) guarantees began in 2018. In that year, the company shipped a release of the distributed database that guaranteed strong consistency for individual reads and writes at the record level, or linearizability.

However, since one transaction may utilize multiple reads and writes, the transaction as a whole did not have consistency guarantees. That meant that customers that demanded transactional consistency had to write additional application code to ensure the integrity of transactions.

With version 8, Aerospike has expanded its consistency guarantees to cover entire transactions. That so-called serializability now provides consistency guarantees for multiple changes to multiple records within the same transaction, says Aerospike CTO and founder Srini Srinivasan.

Have Your Cake…

Support for full ACID transactions is an important feature for some types of customers, particularly large banks and financial services institutions. While Aerospike has had success in that market, those customers have asked Aerospike to deliver native support for transactions to relieve them of the burden of maintaining that code themselves, Srinivasan said.

“Over the years, we focused a lot on high performance initially to capture a portion of the market, and then we added strong consistency,” the database CTO told BigDATAwire. “We are basically expanding the capabilities of the high-throughput, low-latency market to also have a database which can provide that high performance while not compromising on consistency.”

With full ACID support, Aerospike version 8 opens the door to serving a new class of applications in financial services and consumer-facing markets. Customers that previously had to spend millions of dollars to install high-speed caches in an attempt to speed up standard relational databases will now be able to simplify their architectures with Aerospike, Srinivasan said.

“We already are the highest performing database for a class of applications, especially consumer-side applications, which typically address tens to hundreds of millions of consumers, in some cases even a billion,” he said. “But for those systems, the traditional approach has been that you have to compromise severely on performance in order to provide consistency.

“We worked very hard in maintaining that performance while also providing these traditional database features,” Srinivasan continued. “Thirty to 40 years ago, Oracle and relational databases–and even IMS before that–had transaction concepts, but they don’t provide the high performance required. The journey we’ve had is starting with the high-performance first and then adding consistency at the single-record level and now with reliability at the multiple-record level.”

…And Eat It Too

The ACID guarantees are provided for all data types supported by Aerospike, from key-value and JSON documents all the way to graph and vector data types, Srinivasan said.

“It’s all about not having the application writer have to solve these problems at their level and for the database,” he said. “We use the transaction support underneath, which enables the whole system to become more robust.”

Some of Aerospike’s customers in telecommunications could streamline their application architecture by upgrading to version 8. For instance, one telecommunications company with multiple lines of business is forced to maintain separate accounts for the same customer because of limited support for serial transactions in the database. With Aerospike version 8, it will be able to combine those accounts into a single record, Srinivasan said.

There are two types of customers that will really be able to use the ACID transaction support, the CTO said. The first are existing customers, such as the telecommunications firm, who are already running at scale but are forced to write complex code in the application to meet business requirements.

“The other ones are people who always needed these kinds of transactional features with strict serializability, but were not able to use Aerospike for high performance applications,” Srinivasan said. “Those would be completely brand new customers…on the consumer-oriented and real-time application space.”

A Legacy of High Performance

Large cost-savings could be had for customers who tried to speed up traditional relational databases that offered strong consistency guarantees but lacked the scale of a fast database like Aerospike.

“We have cases where we have reduced system sizes from 4,000 nodes to 400 nodes by eliminating a cache layer and also compressing the server,” Srinivasan said. “That is one of our big differentiations over the years. Comparable systems for real-time performance need to put all their data in DRAM. Aerospike has this technology we call hybrid memory architecture where we use SSDs in real time to read data.”


With the advent of larger SSDs that can hold hundreds of terabytes of data, plus sufficient DRAM for indexes, Aerospike has the capability to replace scale-out databases that are 100x bigger. That legacy of high performance is Aerospike’s bread and butter. In fact, the largest publicly referenceable Aerospike deployment is able to push upwards of 100 million database transactions per second. (The throughput is even higher for non-publicly referenceable clients, Srinivasan said.)

That speed is one reason why the big public cloud companies are working with Aerospike to support workloads that other databases can’t handle, at least not without a significantly larger hardware footprint.

“The kinds of workloads that Aerospike handles, virtually no one else handles,” Srinivasan said. “Therefore, all the cloud providers would like to get a piece of the action, if you will, essentially to be able to support their customers on their clouds to run workloads with Aerospike.”

Aerospike is an open source project, and is licensed under an AGPL license. However, features like ACID transaction support and the hybrid SSD-DRAM storage architecture are only available in the enterprise version that is licensed by Aerospike. You can find more information at www.aerospike.com.

Related Items:

Aerospike Nabs $109M to Grow Data Biz Turbocharged by AI

Aerospike Is Now a Graph Database, Too

Aerospike Adds JSON Support, Preps for Fast, Multi-Modal Future



Amazon EventBridge Event Bus Cross-Account Event Delivery

MMS Founder
MMS Steef-Jan Wiggers

Article originally posted on InfoQ. Visit InfoQ

AWS recently announced a new feature for Amazon EventBridge that allows users to deliver events directly to AWS services in different accounts. According to the company, this enhancement enables the use of multiple accounts to improve security and simplify business processes.

Amazon EventBridge Event Bus is a serverless event broker that enables scalable event-driven applications by routing events between applications, third-party SaaS, and AWS services. The newly introduced feature lets users directly target services in another account without additional infrastructure. Chris McPeek, a Principal Solution Architect at AWS, explains in an AWS Compute blog post:

With this new EventBridge feature, you can deliver events directly from the source event bus to the desired targets in different accounts. This simplifies the architecture and permission model and reduces latency in your event-driven solutions by having fewer components process events along the path from source to target.

For example, users can route events from their EventBridge Event Bus to a different team’s SQS queue in another account, with the receiving team only needing to grant Identity and Access Management (IAM) permissions for access. Events can be delivered across accounts to targets that support resource-based IAM policies, including Amazon SQS, AWS Lambda, Amazon Kinesis Data Streams, Amazon SNS, and Amazon API Gateway.

(Source: AWS Compute blog post)

The company recommends enabling cross-account event delivery by establishing mutual trust between source and target accounts. Source event bus rules must use an AWS IAM role to send events to designated targets, achieved by attaching an execution role to those rules.

Targets in different accounts need a resource access policy to receive events from the source account’s execution role. Targets like Amazon SQS queues, Amazon SNS topics, and AWS Lambda functions support this process.

Having an IAM role in the source account and a resource policy in the target account allows for fine-grained control over the PutEvents action. Users can also define service control policies (SCPs) to regulate who can send and receive events in their organization.

To set up cross-account event delivery (assuming the source event bus exists), users can follow these three steps, sketched in the example after the list:

  • Target account: Create a delivery target (e.g., SQS queue).
  • Source account: Configure a rule for event delivery, set the target SQS queue ARN, and attach an execution role with permissions to send messages.
  • Target account: Apply a resource policy to the SQS queue to allow the source event bus execution role to send events.
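
As an illustrative sketch (not from the announcement; the account IDs, names, and ARNs below are placeholders), the two halves could look like this in CloudFormation:

# Source account (111111111111): rule targeting a queue in the target account
CrossAccountRule:
  Type: AWS::Events::Rule
  Properties:
    EventBusName: orders-bus                 # hypothetical source event bus
    EventPattern:
      source:
        - com.example.orders
    Targets:
      - Id: team-b-orders-queue
        Arn: arn:aws:sqs:us-east-1:222222222222:orders-queue
        RoleArn: arn:aws:iam::111111111111:role/event-delivery-role

# Target account (222222222222): queue policy trusting the execution role
OrdersQueuePolicy:
  Type: AWS::SQS::QueuePolicy
  Properties:
    Queues:
      - https://sqs.us-east-1.amazonaws.com/222222222222/orders-queue
    PolicyDocument:
      Statement:
        - Effect: Allow
          Principal:
            AWS: arn:aws:iam::111111111111:role/event-delivery-role
          Action: sqs:SendMessage
          Resource: arn:aws:sqs:us-east-1:222222222222:orders-queue

The execution role in the source account would additionally need an IAM policy allowing sqs:SendMessage on the target queue ARN.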

Yan Cui, a Serverless Hero, tweeted on X:

This is AWESOME! EventBridge now delivers events to cross-account targets directly, without having to send them to the default bus in the target account first.

With Cross-Account Event Delivery, AWS brings another feature to the service after adding features like AppSync Integration. In a LinkedIn post, Sheen Brisals, an AWS Serverless Hero, stated:

In a way, this feature now pushes EventBridge to become a ‘true’ enterprise event-streaming platform. There are still gaps to fill, but we are getting there.

Users can find more information and guidance on Amazon EventBridge on the documentation pages and GitHub repository. In addition, more details on EventBridge pricing are available on the pricing page.



Presentation: Unveiling the Tech Underpinning FinTech’s Revolution

MMS Founder
MMS Wojtek Ptak Andrzej Grzesik

Article originally posted on InfoQ. Visit InfoQ

Transcript

Grzesik: The core question that we wanted to start with is: how is it that some organizations can deliver software, do it quite well, and do it consistently, and some can't? We will not explore the "can't" part; we will explore the "can" part: how is it possible, and what are the shared experiences that Wojtek and I have from the places we worked at? The answer is usually not something on the surface, so it's not trivial, which you probably know. One thing that we found to be at the core is something that comes down to culture.

The Invisible Engine of Success: Culture (Westrum’s Model)

Ptak: Speaking of culture, we really wanted to talk to you about one of the concepts that fits well into the topic: Westrum's organizational culture model. It's well ingrained into DevOps as well; if you look for DORA and Westrum, you will find really good reading materials on the DORA website regarding the Westrum culture model. Think about what kind of culture model you are in. Westrum spent his scientific career studying how organizations work, and that's also how we know about his work in the DevOps realm. The model consists of three types of organizations. The first is pathological. There are several features that tell you an organization is really pathological.

One of them would be that cooperation is really low between different teams and departments. We know Conway's Law, of course. Messengers, meaning whistleblowers, will be shot on sight; meaning, of course, we will reject them, and so on. This is the type of organization that is most likely to die in the current world, because collaboration between teams is very low and they do not innovate very well. The second type of organization that Westrum described was bureaucratic. This is the type of organization which is really rule oriented. We have rules and we stick to the rules; process heavy, bureaucracy heavy. Looking at the same features, you can guess what it means. Collaboration and cooperation will be quite modest. Messengers will be neglected.

Probably, if the rules approve, and they play by the rules, it's fine, but they will most likely be neglected. Very important: if there is an incident, we usually look for justice in this type of organization.

Grzesik: In these organizations, responsibilities narrow, so the scope that people take onto themselves narrows further and further, until it ends up like the previous type, where nobody is responsible for anything. Here there is still some responsibility, but not too much.

Ptak: Then there is the generative type of organization, the one we really wanted to look into, and why they're successful. These are the ones that you really want to work for. Very high collaboration and cooperation. Messengers: we train people to be whistleblowers. We train people to look for opportunities to learn. We treat every failure as an opportunity for learning. We really train people to do it.

Grzesik: Organizations seek signals wherever they appear, and they want it. They do it consciously because they absolutely want to be aware of what’s happening, and they want to make decisions based on that.

Background

My name is Andrzej Grzesik. I've been head of engineering and principal engineer. I now build distributed systems at a company called LMAX. It's an exchange that does high-performance Java, nanoseconds-counting Java. I like building systems. I'm proud to be a Java champion. I also run a conference and a Java User Group, and speak at conferences. I like technology for some strange reasons.

Ptak: I'm Wojtek. I'm a former CTO. I also had my own company. Now I'm an engineering executive, working with Revolut for almost exactly 2 years. I'm responsible for Revolut Business. If you use Revolut Business, I'm the guy to talk to about the bugs, if you have any, of course. I'm also co-host of a community initiative called CTO Morning Coffee, where we really want to train the next generation of engineering leaders.

Revolut has a family of products. You probably know retail; it's very popular in England. As far as I know, we're number one in England. Business is also, as far as I remember, the number one B2B solution. We also have an app called Trader coming very soon in a reshaped form; that will also be a separate application, as will Revolut for juniors. We're definitely experiencing hyper growth.

Two years with Revolut, and we grew two and a half times since I joined. We have over 40 million retail customers. The business itself, for instance, is growing almost 70% now, year-over-year, and really accelerating. That's important for me, because it sets the context: I'm working in a company that grows really fast and is actually accelerating that growth. At the same time, it's very rapid product development. Usually, teams will have at least several deploys per day to production. Lead time for changes, one of the DORA metrics, is usually way less than three hours.

Grzesik: When I joined Revolut, I was head of backend engineering. Backend was 120 people; when I left it was 400, in under 3 years. That's quite a growth. I'm quite proud of the things that happened there, and of all the examples about Revolut that Wojtek is going to speak about.

How to Measure Culture

Ptak: Coming back to Westrum's organization types, there is actually a practical first thing that we want to tell you, so you can recognize the type of the organization: how to measure the culture. Ask your team the following types of questions, rated from strongly disagree, through neither agree nor disagree, to strongly agree.

Grzesik: If you're a leader, run a survey across all of your teams in all departments, and you'll get signals. Those questions, like "information is actively sought": how do people rate them, from 1 to 5, 1 to 7, whatever scale you want, something that gives you a range. Then, if you notice good spots, good; if you notice bad spots, maybe teams, maybe departments, maybe some areas, you will have places to begin.

Ptak: Hopefully you recognize Gene Kim, one of the people who really started the DevOps revolution. He has a podcast, and we definitely recommend it. There are two episodes with Dr. Westrum.

Conway’s Law and Scaling Architecture

With companies like that, let’s talk about Conway’s Law and scaling the architecture. As we discussed, we are at the hyper growth scale. How to scale? Usually what would happen is you see something like this. What do you see?

Grzesik: A famous picture of teams, complexity, services, whatever you call it is there. Something that’s not immediately visible is the amount of connections, and how do you get from one far end to another? That’s actually a problem that organizations, as they scale up, get into. That’s something that I like to call problem of knowledge discovery. How do we know what we know?

How do we know who knows what? How do we know who the good people are to ask a question about a service, about how the code is organized? Who are the people to get approval from? All of those questions. What services do we have? How reliable are they?

Ptak: If I have an incident, why do I have it? How many services that I was dependent on are in the chain? Between the database and my endpoint, how many services are truly there?

Grzesik: If you have a payment flow that needs to be processed, which services are on the critical path, and so on. For the backend services, there was even a talk about Spotify’s Backstage at QCon. Backstage, if you haven’t been there, it looks like that. It’s a catalog of services which has a plug-in architecture, which gives an information radiation solution to the problem. That’s very nice and awesome because it allows services to be discovered, so people can know what services there are in the organizations. What’s the description? What’s the role? What’s the responsibility? What’s the materiality? Which means, what happens if it goes down? How important is it to business operation? What are the SLOs, SLAs? Aspirational and contractual expectations of quality? How do we monitor? How do we get there? Upstream, downstream dependencies.

Basically, what's the shape of the system? If you have more than some number of services, you want that; otherwise, it's hard to fish out of the code. Anything else that's relevant. Backstage solves it for many people. Backstage has plugins, but not everybody uses Backstage. What does Revolut use?

Ptak: Revolut has its own solution, and I'm going to talk about a couple of points which are important. It's called Tower. It gives us technology governance, so everything is described in code. It's trackable, fully auditable, fully shareable. It looks like that: a nice interface. I can go there, look for any service, any database, pretty much any component in the infrastructure, and get all of the details which we discussed, including the Slack channels of the teams, the Confluence pages with the documentation, SLOs, SLAs, logs, CI/CD pipelines. I know everything. Even if we have this massive amount of services, I know exactly where to look for the information. Regarding the dependencies, here it is.

For instance, I can get all of the dependencies. We're also working on an extension which will allow us to understand the event-based dependencies too, so the asynchronous ones. That's a big problem in a large distributed system, and that's actually coming very soon. As a leader of the team, I can also understand all of the problems that I have through several scorecards. We can also define our own scorecards, so I can, for instance, ensure that I have full visibility. What teams? How do they work? How do they actually maintain the services?

Systems Thinking

Coming back to our example, what else do you see?

Grzesik: We have a beautiful picture and we have a system, but as we build, as we have more services or we have more components, we have a system which is complex, because the business reality that we’re dealing with is complex. Now that we’ve introduced more moving parts, more connections, we’ve made a complex system even more complicated. Then, how can we deal with that? There is a tool that we all very much agree that is a way to go forward with that, and that tool is systems thinking.

Ptak: Systems thinking is a helpful model to understand the whole system that we're talking about, for instance, a FinTech bank solution. Complexity, as Andrzej said, can come from, for instance, compliance, [inaudible 00:13:47]. Complication is something we're inviting. There are two important definitions that I really wanted to touch on. In systems thinking we have one definition, which is randomness. Randomness of the system means that we cannot really predict it.

Grzesik: It’s things beyond our control. Things that will manifest themselves in a different random way that we have to deal with, because they are part of our team.

Ptak: They're unpredictable. Or we see that as noise in the data. As I said, there is complexity which is there by design. For instance, onboarding: in the business we're present in over 30 countries. Onboarding any business is complex by definition. We cannot simplify it. It's complex because, for instance, you need to support all of the jurisdictions. You need to make sure that you're compliant with all of the rules. That is very well described in several books. The one that we're using for this example is Weinberg. Weinberg is a super prolific author, so a lot of books. That one comes from "An Introduction to General Systems Thinking". He proposed a model where there are three types of complexities in our systems.

Grzesik: The very first one, the easiest one, is organized simplicity. That's the low randomness, low complexity realm of well understood things. It can be things that we conquer with grunt work, with a known stack and known services: problems that we know how to solve. They are business as usual. They are trivial. There is nothing magical there. There shouldn't be anything magical there. If we keep the number of them low, and we keep them at bay, they are not going to complicate our lives too much.

Ptak: If you make things more complicated, as you can see on the axis, so introduce randomness, you will get to the realm of unorganized complexity. You will get a high randomness. If you have many moving parts, and each of them introduce some randomness, they sum up, multiply even sometimes. The problem is that actually the system gets really unorganized. Our job is to make sure that we get to the realm of organized complexity.

Grzesik: Which is where our system organizes. Business flows are going to use this technology in a creative way to solve a problem. That’s what we do when we build systems, not only technical, but in the process and people and interactions and customer support sense of things, so that a company can operate and people can use it, and everybody is happy.

How Do We Introduce Randomness into Our Systems?

There is a problem, as the system grows, it’s going to have a more broad surface area, and that’s normal because it’s bigger. Which means there’s going to be randomness that is going to happen there, and then there is some randomness that people want to introduce, like having multiple stacks for every single service.

Ptak: Can I have another database?

Grzesik: Can I put yet another approach to solving the problem that we have, because I like the technology for it?

Ptak: Can I get another cloud provider? You know where it's going. How do we actually introduce that randomness into our systems? How do we make our system complicated, and therefore slower? Because you need to manage the randomness. From our perspective, as we discussed, we see three really important sources of randomness, where you invite the problems into your organization. The first is the number of frameworks and tools that you have. If you allow each team to have their own stack, the randomness and the complication of the overall system, all of the dots that you can see connected, go through the roof.

Grzesik: Then you have problems like: there is a team that's used to Java that has to read a Kotlin service; maybe they will be ok. Then they have to look at a Rust service and a Go service, and then: how do I even compile it? What do I need to run it? That gets complex. If there is a database that I know how it works, I use a mental model for consistency and scaling; then somebody uses something completely different. It becomes complex. Then there is an API that always speaks REST, and somebody puts a different style of API in there that you then have to build a model for. There is that complexity, which is sometimes not really life changing, but it just adds on.

Ptak: Another thing is differences in processes. A lot of organizations will understand agile as, let people choose whatever they want to do, make sure that they deliver. A lot of people will actually have their own processes. The more processes, the more different they are, the bigger problem we have, the bigger randomness. Same with the skills.

Grzesik: Same with skills. Both of those areas mean that the answer to "how do we solve a problem, or how do we reach a solution to a problem in our area?" will differ across teams. That means that it's harder to transfer learnings, and that means that you have to find two answers in an organization rather than one, and then apply the pattern in every single place. If you have a team that follows DDD, you know that you're going to get tests. If there is a team that would like to do testing differently, then the quality of tests might differ across solutions. What we are advocating, and what we have experience of, is automating everything.

Ptak: Otherwise, you start to take care of things that are not important to your business. You start to use the energy of your teams not to build stuff, not to build your products, not to scale, but to solve problems that are really not important to your business. As somebody said, we should be focusing on the right things.

What's the Revolut approach? Simplified architecture by design. We try to really reduce the randomness. Simplicity standards are enforced, so you cannot just use whatever you want. We enforce a certain set of technologies; I'll touch on it. It's enforced by our tooling. We really optimize for a very short feedback loop. First, to talk about architecture: it's designed and supported by the infrastructure and our architecture framework, the service topology. Every service has its own predefined type. Every definition contains how the service should behave, how it should be exposed, what it should be integrated with, how it integrates, if and how it integrates with the database, and so on. The important ones: the frontend service, which holds the resource definitions.

The flow service is the business logic orchestration, and the state service is a domain model. That gives us the comfort that we know exactly what to expect when we open a service: you know exactly what will be there, how it will be structured, and so on. Revolut's architecture is super simple in this way, to the level where it's really vanilla Java services with our own internal DDD framework, deployed on Kubernetes. That's all. We use some networking services from Google. The persistence layer, that's interesting: Postgres is used as the database, as the event store, and as the event stream. We have our own in-house solution. Why? Why not use Kafka and so on, Pub/Sub? Because we know exactly how to deal with databases.

We know exactly how to monitor them, how to scale them. If you introduce, for a very important banking component, a technology that is not exactly built for these purposes, you introduce randomness, and you will need to build workarounds around that randomness. Data analytics, of course, has its own set of features. Here is an example, coming back to my screenshot. These things are enforced.

Grzesik: A stack of Java services, and the CI/CD, what is it? Is it a template?

Ptak: It's a definition, and we know exactly what to expect. The whole CI/CD and monitoring will be preset. When you define your service, everything will be preset for you. You don't need to worry about anything that should not be a problem for you; you're not worrying about, for instance, CI/CD or monitoring, you're solving a business problem. You need to focus on the business logic. That's what we optimize for our engineers.
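
A minimal sketch of what such a service definition could look like (the talk does not show Tower's actual format; every name and field below is hypothetical):

service:
  name: payments-flow
  type: flow-service          # one of: frontend-service, flow-service, state-service
  owner: payments-team
  slack: "#payments-team"
  docs: https://wiki.example.com/payments-flow
  slo:
    availability: "99.9%"
  dependencies:
    - payments-state          # state service holding the domain model
    - payments-db             # Postgres instance, also event store and stream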

Heuristics of Trouble

Grzesik: We have information being radiated. We have things that are templated. We have a simple architecture. Then, still, how do you do it well? How can you answer that question?

Ptak: We wanted to go through heuristics of trouble. We really want to ask you to see how painful it is for you. We have some examples that you probably can hear in your teams. The first one would be.

Grzesik: Why do people commit to our code base? Have you ever heard it?

Ptak: That would be a sign of blurred boundaries: the problem of no clear ownership, conflicting architecture drivers that lead to "I don't care" solutions.

Grzesik: That’s this randomness that we mentioned before. It’s ok for people to commit to other services. Absolutely, it’s a model that the company I work at uses. I think your place also uses that. The thing is, somebody should be responsible. Somebody should review it. That’s the gap here. If somebody commits without thinking, that’s going to be strange.

Ptak: Another thing that you can hear in the teams. I wonder, when did you hear it? It’s them, whenever something happens.

Grzesik: “It’s them. This incident is not ours. They’ve added this. They should fix it”. If you connect it with Westrum’s bureaucratic model, this is exactly how it manifests in a place. In the grand scheme of things, if everybody works in the same company and everybody wants shared success of the company, this is not the right attitude. How do you notice? By those comments, in Slack, maybe in conversations, maybe by the water cooler, if you still go to the office.

Ptak: It's a lack of ownership. We see the blame culture, fear of innovation, and actually good people will quit. The problem is the others will stay. That's a big problem for the organization. The same goes, of course, for deployable modules and teams; that's also important to understand regarding ownership. Another thing that you may hear: let me fix it.

Grzesik: There is a person, or maybe a team, that is amazing because they fix all the problems. They are so engaged. They run on adrenaline. They almost never sleep, which fixes the problem. You've met them, probably. The problem they generate is knowledge silos, because they know how things work and nobody else does. They also reduce ownership because, if we break something, somebody else is going to fix it. That's not great. Because of how intensively they work, they risk burnout, which is a human thing, and it happens. Somebody can operate at this pace and at this level for maybe a couple of months, maybe a couple of years. Eventually something happens, and they are no longer there. Maybe they decided to go on holidays in Hawaii for a couple of months: this happens, a sabbatical.

Ptak: God bless you when you have the incident.

Grzesik: Then you have a problem, and then what do we do? We don’t know, because that person or that team is the one that knows.

Ptak: Very connected. You have an incident, and you've heard: contact them on Slack. The problem is, you have components which are owned by a central team, and only the central team keeps the knowledge, for themselves. It's always like a hero's guild. They will be forcing their own perspective. They will reduce the accountability and ownership of the teams. They will be the bottleneck in your organization.

Another very famous one: you have a bug or an incident, you contact the team, and you hear, "create a Jira ticket". That's painful for all of you. That's a good sign of siloed teams and conflicting priorities. It means that there is very low collaboration. We don't plan together. We don't understand each other's priorities. We very often duplicate efforts. How many times have you seen in your organization: they won't be able to build it, we need it, so we'll build our own, or we'll use our own, whatever. There goes the randomness. Another one that you may hear is that you ask a team to deliver something and they say: I actually need to build a workaround in our framework, because we're not allowed to do it with our technology.

Grzesik: The problem that we have is not: how do you sort a list? But: how do you sort a list using this technology, that language, that version of the library, restricted to this database?

Ptak: That’s actually when technology becomes a constraint. It’s a very good sign that the randomness is really high. You are constrained by the technology choices that you made. There are probably too many moving parts.

Grzesik: Another aspect might be how it manifests. People will say: I'm an engineer, I want some excitement in my life, I'm going to learn another library, learn another language. The purpose of the organization is to build software well, and you can challenge that perspective. People can be proud of how well, and how long without bugs, they execute software delivery, but it requires work on the team. This is a very good signal. Another one is hammer operators. If you've met people who will solve every single problem using the same framework and that same tool, any technology that they are very fond of, even if it doesn't fit, or even if they made the technology choices before knowing what the problem really is, that's a sign of a constraint being built and implemented.

Practical Tips for Increasing Collaboration and Ownership

Ptak: Actually, there is some good news for you: we're not made to suffer. Practical tips from the trenches on how to increase collaboration and ownership.

Grzesik: We know all the bad signals, or what things we can look for, so that we know that something is slightly wrong, or maybe there is a problem brewing in the organization. The problem is, it’s not going to manifest itself immediately. It’s going to manifest itself in maybe a year, maybe two years down the line. Some people will have gone. Some people will have moved on. We will have a place that slows down and cannot deliver, maybe introduces bugs. Nobody wants that. How do we prevent it?

Ptak: The first one is, make sure that you form teams around boundaries. For instance, in Revolut, every team is a product team with their own responsibilities, very clear ownership, and most likely a service they own. I would recommend, if you know DDD, going into the strategic patterns and, of course, using business contexts. That's very useful. The second thing is, there is a lot of implicit knowledge in the organization; make it as explicit as possible.

Grzesik: Put ADRs out there. Put designs out there. Don't use emails to transfer design decisions or design discussions. Put it in a wiki. It's written, but it's also asynchronous. In the distributed organizations that we work with, that makes it possible for people to ask questions and comment, and to know what was discussed, what was decided, and why.

Ptak: If you're a leader, I would encourage you to do continuous ownership refactoring. Look for the following signs. Is there peer review confusion: who should review it, how should we review it?

Grzesik: How can you measure that? The time to review is long, because nobody feels empowered, or feels they are the correct person, to review it.

Ptak: Another one would be, hard to assign bugs. The ones that are being moved between teams. We do have it, but we really try to measure it and understand which parts of the apps have this problem.

Grzesik: I don't have that problem, because the place I work in does pair programming. There is no need to do PR reviews. If you do pair programming, you actually get instant review as you pair, which is awesome.

Ptak: Of course, incidents with unclear ownership. Look for these signals. How to deal with situations where you really don’t know who should own the thing. There are a couple of strategies which I would recommend. Again, clear domain ownership. Then the second one, if we still cannot do it, is proximity of the issue. We can say, it’s close to this team’s responsibilities. They’re the best to own it.

Grzesik: Or, they are going to be affected, or the product that they are responsible for is going to be affected. Maybe it’s time to refactor and actually put it under their umbrella.

Ptak: Sometimes we can have central teams or platform teams who can own it, or in the worst-case scenario, we can agree on the ownership rotation, but do not leave things without an owner.

Grzesik: Sometimes something will go wrong. Of course, never in the place we work at, never in the place you work at, but in the hypothetical organization in which something happens.

Ptak: I would disagree. I would wish for things to go wrong, because that’s actually the best opportunity to learn.

Grzesik: It’s a learning opportunity. There is a silent assumption that you do post-mortems. If you do have an incident, do a post-mortem. Some of the things that we can say about post-mortems, for example, first, let’s start with a template.

Ptak: I know it might be basic, but it's really important: teach your team to own the post-mortems. Have a very clear and easy to use template. That's ours, actually. It has, for instance, the important parts: the impact, with the queries or links to logs that I can use. We have several metrics to measure, to see how much better we get: mean time to detection, mean time to acknowledge, mean time to recovery. We do root cause analysis, 5 Whys. The important thing: we will be challenged on that, and I will come back to that.

Grzesik: Also, what is not here is, who’s at fault? There is no looking for a victim.

Ptak: Because they’re blameless. We try to make them blameless.

Grzesik: It should also be accurate, which means it should tell a story. It could be a crime story or a science fiction story, depending on your take on literature, but it should tell the story of what happened. How did we get there? What could we potentially do differently, which is the actionable part? And we have to do it rapidly. Why? Because human memory is what it is. We forget things. We forget context. The more it lingers, the more painful it becomes.

Ptak: We come back to Westrum's pathological organizations; you can guess that this probably won't work in such an organization. A couple of tips that I would have: create post-mortems for all incidents. With my teams, we're actually also doing them for near-miss incidents, when we almost had an incident in production: an amazing opportunity to learn. We keep post-mortems trackable. There is actually a whole wiki that contains all of the post-mortem links, searchable, taggable, very easy to track, to understand what actually happened and how we could improve the system.
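
As an illustration of the elements mentioned here (this is not the actual Revolut template; all names and values below are hypothetical), such an entry could be kept as structured text in the wiki:

postmortem:
  title: Payment API degraded latency
  impact: "1.2% of payment requests timed out over 34 minutes"
  links:
    logs: https://logs.example.com/query/12345
  metrics:
    mean_time_to_detect: 6m
    mean_time_to_acknowledge: 9m
    mean_time_to_recover: 34m
  root_cause_analysis:
    method: 5-whys
    summary: "Connection pool exhausted after a dependency slowed down"
  action_items:
    - owner: payments-team
      action: "Alert when pool saturation exceeds 80%"
      status: in-progress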

Grzesik: I also keep them in the wiki. If your risk team or somebody in the organization says that post-mortems should be restricted to only the people that actually took part, or that they shouldn't be public knowledge: maybe you're leaders, maybe you're empowered, in the correct position to fight it, or escalate it to your CTOs. This is the source of knowledge. This is a source of learning. It's absolutely important not to allow that to happen, because that's what people will learn from and that's what influences people's further designs.

Ptak: Drill deeper into root causes. We actually peer review our post-mortems, and we actually challenge them. It's a very good learning opportunity for everyone. I would encourage this as a great idea.

Grzesik: A very practical attitude. Find the person who is naturally very inquisitive. It can be a devil’s advocate kind of attitude. They are going to ask questions. They are not going to ask questions when people are trying to describe what happened, but they will ask the uneasy questions. That’s a superpower, sometimes, in moderation. If you have such an individual, expose them to some of the post-mortems, figure out a way of working together. That attitude is absolutely very useful.

Ptak: The two last items: track action items. The worst thing is to create a post-mortem and let it die, or make it bureaucratic: I do it for my boss. And celebrate improvements. It might be very obvious knowledge, but if you want to improve the organization and improve the architecture, the Reverse Conway Maneuver, I would recommend post-mortems as one of the things that really teach people to own things and to understand them. It may be basic for some, but it is actually very useful.

Grzesik: The systems that we write will have dependencies, internal and external. That's something that we need to worry about. Making them explicit means knowing what they are. That also means that you can have a look at: how are my upstream and downstream dependencies doing? What are their expectations and aspirations in terms of quality? Do we have circular dependencies? You might discover those if you have, for example, a very big event-driven system and you've never drawn the loops, or which services certain processes flow through. You might get there, and then it's obviously harder to work.

Also, if you know which of your dependencies are critical, then you can follow their evolution. You can see what's happening. You can maybe review PRs, or maybe just look at the design reviews that people in those services do. Of course, talk to them. In a distributed, very big team, that means talking at scale, to an extent, which means RFCs, design reviews, architecture decision records, whatever you want to call them, same thing. DDD integration patterns offer some ideas here. Since I mentioned ADRs or RFCs, what we found works really well are a few very specific ways of doing them.

Ptak: Challenge yourself. We call it a sparring session, or a design review. You invite people who are really good at being a devil's advocate, and you, on purpose, have them review your RFC or ADR to make sure that it's the best it can be.

Grzesik: I will recommend The Psychology of Making Decisions talk, if you want to make your RFCs better, because it already mentions a lot of things that we could have included here, but we don’t have to because that was already mentioned.

Ptak: In a large organization with a lot of dependencies, there is a question of how to make sure that you involve the right people with the right knowledge. They can challenge you, because you might be dependent on a system that they know, and you want them to challenge whether you have taken into consideration all of the problems in that system. What can help you? What's the tool that can help you? It's the RACI model. Use it also for the RFCs.

Grzesik: What is the RACI model? The RACI model attributes different roles, with regard to a problem, to the categories of responsible, accountable, consulted, and informed. Who is responsible? The person who needs to make sure that something happens: a team lead, a head of area, somebody like that. Accountable: who is going to get the blame if it doesn't get done, and if it doesn't get done well? Again, a team lead, head of area, maybe the CTO, depending on the problem. Consulted: who do you need to engage? Maybe security. Maybe ops. Maybe another team that you're going to build a design with.

Ptak: These are your sparring partners.

Grzesik: Those are the people you will invite into an ADR. Then informed are the people who will learn what the consequences are. If they want to come, sure, they can, but they don't have to. Which means, if you've done a few ADRs using this model and know which people to invite, then you almost have a template, not only for the document and what an ADR should look like, but also for who the people to engage are and what kind of interaction you expect from every single group. Look at the benefits. What are they?

Ptak: Some of the benefits: clear, explicit collaboration and communication patterns. It really improves decision making. We don't do maybes; we know exactly who to involve and who to consult with. It really facilitates ownership. It's very clear who should own it, who should be involved, who should be told about the changes. I would encourage you to review it regularly. A very typical example from Revolut would be: responsible is usually the team owning a feature. Normally, I would be accountable. For consulted, we make sure that, for instance, other departments, other heads of engineering, other teams, or the CTO, if it's a massive change, are consulted. And informed can be, for instance, a department or the whole company, so we know exactly how to announce any changes.
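
For illustration (a hypothetical format, not Revolut's actual tooling), a RACI block in an RFC's front matter could be as simple as:

raci:
  responsible: team-payments          # team owning the feature
  accountable: head-of-engineering
  consulted:
    - security
    - platform-ops                    # the sparring partners
  informed:
    - business-banking-department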

Signals for Refactoring Architecture

Grzesik: We know how to make awesome designs.

Ptak: We know how to execute well, and now your architecture needs scaling.

Grzesik: What could possibly go wrong?

Ptak: You have the system, and we need to talk about the thing that we would call architecture refactoring. Once again, we’ve got heuristics of pain. The first sentence that you may hear, it takes forever to build.

Grzesik: If you've ever seen it: people on your team say that it's a slow deploy pipeline, so hours, not minutes; people releasing a high percentage of red builds. Those are the signals. What is the consequence? The feedback loop slows down, and also the time to deploy slows down, which means small changes will not get into production quickly, which means there is a tendency to cool down, slow down, and be a bit bureaucratic, and maybe run the tests again. Maybe run the tests again and again, because some of them will be flaky or intermittent, however you want to call them. That's a problem to track.

Ptak: You may hear something like that. It’s Monday and people are crying, for any reason.

Grzesik: People who dread going back to work.

Ptak: That's usually a sign, simplifying of course, of a system that is hard to maintain, where simple changes are difficult. It has a very steep learning curve, an onboarding curve. You need to read all of the code, because you don't know whether, if you pull spaghetti from one side of the plate, a meatball will fall off the other side. That's typical of a system that is hard to maintain.

Grzesik: Who gives these signals? Senior and lead engineers, people who have been developing software; they know in the back of their head that it should be a simple change. Then they learn that something isn't right, that it actually is more complex. They've spent their third week doing something very trivial. That's a signal. That's something that is very hard to pick up on a day-to-day basis, because we want to solve the problem. We want to get it done, whatever it is. Then we'll do the next thing that we want to get done. Surveys will capture that. Or people we onboard: ask them after a couple of weeks, maybe months, and see, what is their gut reaction? Is it nice to work with? Is it nice to reason about? Do they get what's happening?

Ptak: Another quote that you may hear: there is a release train to catch. That's a very good sign of slow time to production. We have forced synchronization of changes; we need teams to collaborate to release something. That means that we have infrequent releases. Of course, that means that we're really slow and we are not innovating quickly enough. That forced synchronization of changes between teams means that they're actually not working on the most important things at the time. Another sign, and another quote that you can hear: we're going to crash.

Grzesik: Performance issues, slow response times, frequent crashes. The number of errors in services. Of course, you can throw more services, spin up more, to work around it. It’s going to eventually lead to, hopefully, a decision of scaling the architecture. We need to scale, not add more regions, more clouds, but something different.

Ptak: How to scale and refactor architecture: tips. Of course, every organization is in this situation. The important thing is, you have a large system with, of course, many moving components, as we've shown you. There is a temptation, when you want to refactor something, to decide: "This time we'll make it perfect." Every greenfield project: "This time, it will work." Usually, what you really want to do is review the CI/CD, for instance, the patterns, the infrastructure. We can now rebuild the whole thing and it will be shiny. The problem is, it doesn't work.

Grzesik: What do you do instead? You might have heard of the theory of constraints. You might have been doing software optimization.

Ptak: Let’s apply it. The first thing is, you need to identify the biggest pain of all.

Grzesik: Some examples: tests are taking too long, CI/CD is too long, and so on. You can definitely come up with more examples. You pick one, and then you try to work on it, which is formally called exploiting the constraint.

Ptak: You focus everything on it. You ignore the rest, and you fix it, but to the level where you actually remove it, and you get even better. It not only stops being a constraint; you remove it as a constraint for a longer period of time.

Grzesik: Then, the very important last element: you rebuild the list from scratch, because the previous order will most likely no longer apply. How do you then use it?
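The loop just described is, in effect, a small algorithm. Below is a minimal sketch in Python of that constraint-driven loop, assuming a team can score its pains numerically; the pain names, scores, and function names are illustrative inventions for this article, not anything from the talk.

```python
def refactor_by_constraints(assess_pains, exploit, rounds=2):
    """assess_pains() returns {pain: severity}; exploit(pain) removes that pain."""
    for _ in range(rounds):
        pains = assess_pains()             # re-rank the list from scratch each round
        if not pains:
            break
        worst = max(pains, key=pains.get)  # identify the biggest pain of all
        exploit(worst)                     # focus everything on removing it

# Hypothetical usage with made-up pains and scores:
backlog = {"slow builds": 9, "flaky tests": 7, "manual releases": 5}
refactor_by_constraints(
    assess_pains=lambda: dict(backlog),
    exploit=lambda pain: backlog.pop(pain),
)
print(backlog)  # {'manual releases': 5}: the two worst constraints were removed
```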

Ptak: There is a second approach that we can take. Let’s say you don’t have one very large pain point. It’s not like you’re crying, but you really want to optimize for something that you know you will need. It’s called a fitness function. We really recommend looking at several examples. It can be, for instance: we want builds to be 10% quicker by the end of the quarter. That’s how it works: you define a metric, you devote everything to improving it, and then you work on it; a sketch of such a check is shown below. What you can do is combine the two approaches. Let me tell you how we did it. For the last two quarters, we’ve been working on Revolut Business modularization.

The biggest pain was build time. It took over an hour for us to build, and over two hours to release to production. For us, that was way too slow. The system was massive: over 2000 endpoints, nearly 500 consumers, and nearly 20 teams involved in the project. That’s exactly how we applied the theory of constraints: we focused on the build time. Now every team has their own service. We reduced the build times by 75%. Massive. We optimized for one thing only; we haven’t, for instance, refactored the architecture.
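A fitness function like the one Ptak describes can be wired into a pipeline as a simple gate. Below is a minimal sketch in Python, assuming build durations (in minutes) can be pulled from the CI system; the baseline, the p90 choice, and all names are illustrative assumptions, not Revolut’s actual tooling.

```python
import sys

BASELINE_P90_MINUTES = 60.0                      # p90 build time at the start of the quarter
TARGET_P90_MINUTES = BASELINE_P90_MINUTES * 0.9  # "10% quicker by the end of the quarter"

def p90(durations):
    """90th-percentile build duration, in minutes."""
    ordered = sorted(durations)
    index = max(0, round(0.9 * len(ordered)) - 1)
    return ordered[index]

def check_fitness(recent_build_minutes):
    current = p90(recent_build_minutes)
    print(f"p90 build time: {current:.1f} min (target {TARGET_P90_MINUTES:.1f} min)")
    return current <= TARGET_P90_MINUTES

if __name__ == "__main__":
    # Hypothetical data: the last 20 build durations, pulled from the CI system.
    builds = [58, 61, 55, 63, 52, 57, 59, 54, 60, 56,
              53, 58, 55, 57, 62, 51, 54, 56, 59, 53]
    sys.exit(0 if check_fitness(builds) else 1)  # a non-zero exit fails the pipeline
```

If the check fails, the pipeline goes red, which keeps the metric in front of the team every day.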

The Cold Shower Takeaways for Uncomplicating Architecture

Grzesik: You’ve probably heard this: a bad system will beat a good person every time. If the organization you work in has shown this to you, in its history or in previous lives, it’s a learning point. Which means the question that we keep asking ourselves, when we try to design how teams and processes work in the places we work at, is: can we build a system, or how do we build a system, that supports people to do the correct thing and keeps giving the right signals?

Ptak: To build an antifragile system, because that’s what we’re talking about, learn from nature. Nature has its own ways. Apply stress to your organization, to your system, to your architecture. Look for things to simplify, unify, and automate. We gave you several tools for how to do it. Learn from nature: you really do want to apply stress, to the architecture and to your teams. That’s very important.

Grzesik: A crucial element of that is a short cycle time. Humans have a limited attention span, which means that if we can observe both the change and its effect, we’re going to learn from it. If it takes months or years from one to the other, it’s hard, and we’re probably going to do it less.

Ptak: We gave you several tips on how to build an organizational growth mindset. Definitely look for them. They may seem basic, but if you do them well and connect them, they will actually lead to teams improving themselves and owning things better, and you will be able to do what is famously called the Reverse Conway Maneuver. Make sure that ownership is explicit. You really want to look for the things that nobody owns, or where the ownership is wrong, and refactor the ownership. We gave you several tips. That’s how you can work on the different Westrum factors.

Grzesik: If you do that, you can have a system in which a team, or a pair, a couple of engineers, can make a change, can make a decision that they need to deploy to production, and do it. Which means, if they notice something, they take the correct action at the lowest possible level of complexity. Then the learning is already in.

Ptak: A lot of companies would say, we’re agile, and we can do different things. I would say, don’t. Impose constraints. If you really want to be fast, if you really want to scale up in the hyperscale fashion, impose constraints on the architecture and tooling, so people focus on the right things. Remove everything that is non-essential, so you reduce the randomness and the complication of the system. Most importantly, remain focused. A lot of organizations understand agile very wrongly. For instance, in retrospectives, teams are really good at explaining why they haven’t delivered. That’s the other side of the spectrum that we could be hearing.




MongoDB Sees Unusually Large Options Volume (NASDAQ:MDB) – Defense World

MMS Founder
MMS RSS

Posted on mongodb google news. Visit mongodb google news

MongoDB, Inc. (NASDAQ:MDB) was the recipient of some unusual options trading on Wednesday. Investors bought 23,831 put options on the stock. This represents an increase of approximately 2,157% compared to the typical daily volume of 1,056 put options.

Wall Street Analysts Forecast Growth

A number of equities analysts have issued reports on MDB shares. Monness Crespi & Hardt lowered shares of MongoDB from a “neutral” rating to a “sell” rating and set a $220.00 price target for the company in a research report on Monday, December 16th. Wedbush raised shares of MongoDB to a “strong-buy” rating in a report on Thursday, October 17th. Needham & Company LLC increased their price target on MongoDB from $335.00 to $415.00 and gave the stock a “buy” rating in a research report on Tuesday, December 10th. JMP Securities reissued a “market outperform” rating and issued a $380.00 price objective on shares of MongoDB in a report on Wednesday, December 11th. Finally, Morgan Stanley raised their target price on MongoDB from $340.00 to $350.00 and gave the stock an “overweight” rating in a report on Tuesday, December 10th. Two investment analysts have rated the stock with a sell rating, four have assigned a hold rating, twenty-three have given a buy rating and two have issued a strong buy rating to the stock. According to data from MarketBeat, the company currently has a consensus rating of “Moderate Buy” and an average price target of $361.00.

Check Out Our Latest Stock Analysis on MongoDB

Insider Activity


In other news, CAO Thomas Bull sold 1,000 shares of the stock in a transaction on Monday, December 9th. The stock was sold at an average price of $355.92, for a total transaction of $355,920.00. Following the completion of the sale, the chief accounting officer now owns 15,068 shares in the company, valued at approximately $5,363,002.56. This represents a 6.22% decrease in their position. The sale was disclosed in a document filed with the SEC, which is accessible through this hyperlink. Also, Director Dwight A. Merriman sold 1,319 shares of the stock in a transaction dated Friday, November 15th. The shares were sold at an average price of $285.92, for a total value of $377,128.48. Following the completion of the sale, the director now owns 87,744 shares of the company’s stock, valued at approximately $25,087,764.48. This trade represents a 1.48% decrease in their ownership of the stock. The disclosure for this sale can be found here. Over the last 90 days, insiders have sold 42,491 shares of company stock worth $11,543,480. Company insiders own 3.60% of the company’s stock.

Institutional Inflows and Outflows

Hedge funds have recently modified their holdings of the business. Hilltop National Bank boosted its stake in shares of MongoDB by 47.2% during the 4th quarter. Hilltop National Bank now owns 131 shares of the company’s stock worth $30,000 after purchasing an additional 42 shares during the last quarter. NCP Inc. purchased a new position in shares of MongoDB during the fourth quarter valued at approximately $35,000. Brooklyn Investment Group acquired a new stake in MongoDB during the third quarter worth approximately $36,000. GAMMA Investing LLC grew its stake in MongoDB by 178.8% in the 3rd quarter. GAMMA Investing LLC now owns 145 shares of the company’s stock worth $39,000 after acquiring an additional 93 shares during the period. Finally, Continuum Advisory LLC raised its holdings in MongoDB by 621.1% in the 3rd quarter. Continuum Advisory LLC now owns 137 shares of the company’s stock valued at $40,000 after acquiring an additional 118 shares during the last quarter. 89.29% of the stock is owned by hedge funds and other institutional investors.

MongoDB Price Performance

NASDAQ MDB opened at $282.03 on Thursday. The business has a 50 day moving average price of $268.90 and a 200 day moving average price of $270.32. The company has a market capitalization of $21.00 billion, a P/E ratio of -102.93 and a beta of 1.28. MongoDB has a 1-year low of $212.74 and a 1-year high of $509.62.

MongoDB (NASDAQ:MDB) last announced its quarterly earnings results on Monday, December 9th. The company reported $1.16 earnings per share for the quarter, beating analysts’ consensus estimates of $0.68 by $0.48. MongoDB had a negative return on equity of 12.22% and a negative net margin of 10.46%. The business had revenue of $529.40 million during the quarter, compared to analysts’ expectations of $497.39 million. During the same quarter last year, the business posted $0.96 earnings per share. The business’s revenue for the quarter was up 22.3% on a year-over-year basis. On average, research analysts expect that MongoDB will post -1.78 earnings per share for the current fiscal year.

MongoDB Company Profile


MongoDB, Inc., together with its subsidiaries, provides a general-purpose database platform worldwide. The company provides MongoDB Atlas, a hosted multi-cloud database-as-a-service solution; MongoDB Enterprise Advanced, a commercial database server for enterprise customers to run in the cloud, on-premises, or in a hybrid environment; and Community Server, a free-to-download version of its database, which includes the functionality that developers need to get started with MongoDB.


Article originally posted on mongodb google news. Visit mongodb google news



Investors Purchase High Volume of MongoDB Call Options (NASDAQ:MDB)

MMS Founder
MMS RSS

Posted on mongodb google news. Visit mongodb google news

MongoDB, Inc. (NASDAQ:MDB) saw unusually large options trading activity on Wednesday. Stock investors bought 36,130 call options on the company. This represents an increase of approximately 2,077% compared to the average volume of 1,660 call options.

Analysts Set New Price Targets

Several analysts have issued reports on the stock. Truist Financial reiterated a “buy” rating and issued a $400.00 target price (up from $320.00) on shares of MongoDB in a research note on Tuesday, December 10th. Cantor Fitzgerald assumed coverage on MongoDB in a research note on Friday, January 17th. They issued an “overweight” rating and a $344.00 target price on the stock. Macquarie started coverage on MongoDB in a report on Thursday, December 12th. They issued a “neutral” rating and a $300.00 price objective for the company. Monness Crespi & Hardt downgraded MongoDB from a “neutral” rating to a “sell” rating and set a $220.00 price objective on the stock in a report on Monday, December 16th. Finally, Barclays dropped their target price on MongoDB from $400.00 to $330.00 and set an “overweight” rating for the company in a report on Friday, January 10th. Two equities research analysts have rated the stock with a sell rating, four have issued a hold rating, twenty-three have given a buy rating and two have issued a strong buy rating to the company. According to data from MarketBeat.com, MongoDB currently has a consensus rating of “Moderate Buy” and a consensus price target of $361.00.

Check Out Our Latest Report on MongoDB

MongoDB Price Performance


MDB opened at $282.03 on Thursday. The firm has a market capitalization of $21.00 billion, a PE ratio of -102.93 and a beta of 1.28. The business’s 50-day moving average is $268.90 and its two-hundred day moving average is $270.32. MongoDB has a 1 year low of $212.74 and a 1 year high of $509.62.

MongoDB (NASDAQ:MDB) last issued its quarterly earnings data on Monday, December 9th. The company reported $1.16 earnings per share (EPS) for the quarter, topping analysts’ consensus estimates of $0.68 by $0.48. MongoDB had a negative net margin of 10.46% and a negative return on equity of 12.22%. The firm had revenue of $529.40 million for the quarter, compared to the consensus estimate of $497.39 million. During the same quarter in the previous year, the firm earned $0.96 EPS. The company’s quarterly revenue was up 22.3% compared to the same quarter last year. As a group, sell-side analysts expect that MongoDB will post -1.78 EPS for the current year.

Insider Transactions at MongoDB

In other news, Director Dwight A. Merriman sold 1,319 shares of MongoDB stock in a transaction on Friday, November 15th. The shares were sold at an average price of $285.92, for a total value of $377,128.48. Following the completion of the sale, the director now owns 87,744 shares in the company, valued at $25,087,764.48. The trade was a 1.48% decrease in their position. The transaction was disclosed in a legal filing with the SEC, which is available through this link. Also, CEO Dev Ittycheria sold 8,335 shares of the business’s stock in a transaction dated Tuesday, January 28th. The shares were sold at an average price of $279.99, for a total value of $2,333,716.65. Following the completion of the sale, the chief executive officer now directly owns 217,294 shares of the company’s stock, valued at $60,840,147.06. The trade was a 3.69% decrease in their ownership of the stock. The disclosure for this sale can be found here. Over the last 90 days, insiders have sold 42,491 shares of company stock worth $11,543,480. Company insiders own 3.60% of the company’s stock.

Institutional Trading of MongoDB

Several hedge funds and other institutional investors have recently made changes to their positions in MDB. Nisa Investment Advisors LLC grew its holdings in shares of MongoDB by 3.8% during the third quarter. Nisa Investment Advisors LLC now owns 1,090 shares of the company’s stock valued at $295,000 after buying an additional 40 shares during the last quarter. Hilltop National Bank lifted its holdings in shares of MongoDB by 47.2% during the fourth quarter. Hilltop National Bank now owns 131 shares of the company’s stock worth $30,000 after purchasing an additional 42 shares during the period. Avestar Capital LLC increased its holdings in MongoDB by 2.0% in the 4th quarter. Avestar Capital LLC now owns 2,165 shares of the company’s stock worth $504,000 after buying an additional 42 shares during the period. Tanager Wealth Management LLP lifted its stake in MongoDB by 4.7% during the 3rd quarter. Tanager Wealth Management LLP now owns 957 shares of the company’s stock valued at $259,000 after acquiring an additional 43 shares during the period. Finally, Rakuten Securities Inc. boosted its holdings in shares of MongoDB by 16.5% in the 3rd quarter. Rakuten Securities Inc. now owns 332 shares of the company’s stock valued at $90,000 after acquiring an additional 47 shares during the last quarter. Hedge funds and other institutional investors own 89.29% of the company’s stock.

MongoDB Company Profile


MongoDB, Inc., together with its subsidiaries, provides a general-purpose database platform worldwide. The company provides MongoDB Atlas, a hosted multi-cloud database-as-a-service solution; MongoDB Enterprise Advanced, a commercial database server for enterprise customers to run in the cloud, on-premises, or in a hybrid environment; and Community Server, a free-to-download version of its database, which includes the functionality that developers need to get started with MongoDB.


Article originally posted on mongodb google news. Visit mongodb google news



OpenAI Launches Deep Research: Advancing AI-Assisted Investigation

MMS Founder
MMS Robert Krzaczynski

Article originally posted on InfoQ. Visit InfoQ

OpenAI has launched Deep Research, a new agent within ChatGPT designed to conduct in-depth, multi-step investigations across the web. Initially available to Pro users, with plans to expand access to Plus and Team users, Deep Research automates time-consuming research by retrieving, analyzing, and synthesizing online information.

Unlike standard chatbot interactions, Deep Research operates independently for 5 to 30 minutes, browsing the web, interpreting content, and compiling reports with citations. Powered by a specialized version of OpenAI’s upcoming o3 model, it is optimized for reasoning, data analysis, and structured research. The tool is intended for professionals in knowledge-intensive fields such as finance, policy, and engineering, as well as users looking for comprehensive insights on complex topics.

Early evaluations indicate that Deep Research outperforms previous AI models in tasks requiring deep contextual understanding. On Humanity’s Last Exam, a benchmark that assesses AI across expert-level subjects, it scored 26.6% accuracy—more than twice the performance of previous OpenAI models.

Despite its capabilities, the tool is not without risks. AI-generated research can still be misinterpreted, especially when dealing with specialized subjects. Peter Ksenič, a designer and quality manager, cautioned:

Keep in mind, that if you do not know your topic, there is a huge risk of errors. Also if you don’t understand the topic, you can make misleading statements by bad interpretation of obtained knowledge.

Concerns about AI’s reliance on education and professional development have also been raised. Moses Maddox emphasized the importance of AI literacy:

We are spending so much time talking about what AI can do that we are not teaching students how to actually use it. Right now, students and young professionals are letting AI control them instead of the other way around. They’re blindly trusting AI instead of learning how to refine its outputs… AI is not going to replace them. Someone who knows how to use it better will.

OpenAI acknowledges these concerns and plans to refine Deep Research through iterative deployment. While it is designed to streamline complex research, the company emphasizes that AI should be used as a tool to enhance human expertise rather than replace critical thinking.

Access to Deep Research will expand in phases, with a more efficient version in development to support a broader user base. For now, it marks another step in AI’s evolving role as a research assistant.




Lombard Odier taps MongoDB to update banking with generative AI | Frontier Enterprise

MMS Founder
MMS RSS

Posted on mongodb google news. Visit mongodb google news

Lombard Odier, a global Swiss private bank founded in 1796, has partnered with MongoDB to further modernise its banking technology systems with generative AI and reduce technical complexity.

The generative AI-assisted modernisation initiative enabled Lombard Odier to migrate code 50 to 60 times quicker than previous migrations; move applications from legacy relational databases to MongoDB twenty times faster, leveraging generative AI; and automate repetitive tasks with AI tooling to accelerate the pace of innovation, reducing project times from days to hours.

The bank’s GX Program—a seven-year initiative designed to modernise Lombard Odier’s banking application architecture to respond to market developments—launched in 2020 with the goal of enabling quicker innovation, reducing potential service disruption, and improving customer experiences.

Lombard Odier chose MongoDB as the data platform for its transformation initiative. The bank initially decided to develop its portfolio management system (PMS) on MongoDB. PMS, the bank’s largest application with thousands of users, manages shares, bonds, exchange-traded funds, and other financial instruments.

MongoDB’s ability to scale was key to this system migration, as this system is used to monitor investments, make investment decisions, and generate portfolio statements. It is also the engine that runs Lombard Odier’s online banking application “MyLO,” which is used by the bank’s customers.

The bank engaged with MongoDB to co-build a Modernisation Factory—a service that helps customers eliminate barriers like time, cost, and risk frequently associated with legacy applications and eliminate technical debt that has accumulated over time—to expedite a secure and efficient modernisation. 

MongoDB’s Modernisation Factory team worked with Lombard Odier to create customisable generative AI tooling, including scripts and prompts tailored for the bank’s unique tech stack, which accelerated the modernisation process by automating integration testing and code generation for seamless deployment.

“To enhance Lombard Odier’s business strategy, we developed a technology platform that draws on the latest technological innovations to facilitate employees’ day-to-day work, and provide clients with individualised investment perspectives,” said Geoffroy De Ridder, head of technology and operations at Lombard Odier. 

“We chose MongoDB because it offers us a cloud-agnostic database platform and an AI modernisation approach, which helps to automate time-consuming tasks, accelerate the upgrade of existing applications, and migrate them at a faster rate than ever before,” said De Ridder. “Having up to date technology has made a big impact on our employees and customers while proving to be fast, cost-effective, and reducing maintenance overheads.”

In addition to PMS, Lombard Odier modernised multiple other applications from its existing Java application server to the bank’s next-generation framework. The bank then went a step further and worked with MongoDB to use generative AI on a marketing application called “Publications” to accelerate the code migration. 

The bank’s developers were also able to use Modernisation Factory generative AI-based tooling and products to feed into scenarios during regression testing and automatically generate new code much faster than before.

“Financial institutions with as much history as Lombard Odier undoubtedly have large, complex legacy systems that have been supporting the business for decades. However, it is important for organisations to constantly evaluate these systems to understand if they are still serving their best interest today, and for the future,” said Sahir Azam, chief product officer at MongoDB.

Article originally posted on mongodb google news. Visit mongodb google news



MongoDB And Lombard Odier To Enhance Core Banking Tech With Generative AI

MMS Founder
MMS RSS

Posted on mongodb google news. Visit mongodb google news

MongoDB, Inc. (NASDAQ: MDB) announced that Lombard Odier, a Swiss private bank, partnered with MongoDB to enhance its banking tech systems.

In collaboration with MongoDB, Lombard Odier has streamlined the modernization of its systems and apps with generative AI, “reducing technical complexity and accelerating the bank’s innovation journey.”

The generative AI-assisted modernization initiative enabled Lombard Odier to:

  • Migrate code 50 to 60 times quicker than previous migrations
  • Move applications from legacy relational databases to MongoDB twenty times faster, leveraging generative AI
  • Automate repetitive tasks with AI tooling to accelerate the pace of innovation, reducing project times from days to hours

Delivering digital experiences to private and institutional customers while driving cost efficiencies is a “challenge across the banking industry.”

With the acceleration of digitization and the advent of AI, Lombard Odier is evolving its systems and “integrating technologies to give its clients the best possible service and experience.”

The bank’s GX Program—an initiative designed to modernize Lombard Odier’s banking application architecture to respond to market developments—launched in 2020 with the goal of enabling “innovation, reducing potential service disruption, and improving customer experiences.”

Building on its relationship with MongoDB, Lombard Odier chose MongoDB as the data platform for its “transformation initiative.”

The bank initially decided to “develop its portfolio management system (PMS) on MongoDB.”

PMS, the bank’s largest app, manages shares, bonds, exchange-traded funds, and other financial instruments.

MongoDB’s ability to scale was key to this system migration, as this system is used to monitor investments, “make investment decisions, and generate portfolio statements.”

It is the engine that runs Lombard Odier’s online banking application “MyLO,” which is used by the bank’s customers.

The bank engaged with MongoDB to co-build a Modernization Factory—a service that helps customers “eliminate barriers like time, cost, and risk associated with legacy applications and eliminate technical debt that has accumulated over time—to expedite a secure and efficient modernization.”

MongoDB’s Modernization Factory team worked with Lombard Odier to create customizable generative AI tooling, including “scripts and prompts tailored for the bank’s unique tech stack, which accelerated the modernization process by automating integration testing and code generation for seamless deployment.”

Geoffroy De Ridder, Head of Technology and Operations at Lombard Odier said:

“We chose MongoDB because it offers us a cloud-agnostic database platform and an AI modernization approach, which helps to automate time-consuming tasks, accelerate the upgrade of existing applications, and migrate them at a faster rate than ever before. Having up to date technology has made a big impact on our employees and customers while proving to be fast, cost-effective, and reducing maintenance overheads.”

In addition to PMS, Lombard Odier modernized “multiple other applications from its existing Java application server to the bank’s next-generation framework.”

The bank went a step further and worked with MongoDB to use generative AI on a marketing application called “Publications” to accelerate the code migration.

The bank’s developers were able to use Modernization Factory gen-AI-based tooling and products “to feed into scenarios during regression testing and automatically generate new code much faster than before.”

Headquartered in New York, MongoDB’s mission is to empower innovators to create, transform, and disrupt industries “by unleashing the power of software and data.”

MongoDB’s developer data platform is a database with an integrated set of related services that “allow development teams to address the growing requirements for today’s wide variety of modern applications, all in a unified and consistent user experience.”

Article originally posted on mongodb google news. Visit mongodb google news



Aerospike Debuts High-Performance Distributed ACID Transactions – The New Stack

MMS Founder
MMS RSS

Posted on nosqlgooglealerts. Visit nosqlgooglealerts


Aerospike Debuts High-Performance Distributed ACID Transactions

Latest database upgrades deliver consistency for high-performance OLTP workloads, with strict serializability for multirecord transactions.


Feb 5th, 2025 5:00am



Distributed databases with high write speeds have traditionally traded consistency away in favor of availability. Version 8 of Aerospike’s performant multimodal database, which was unveiled Wednesday, helps dispel this notion by offering real-time distributed ACID transactional support at scale.

Already known for its high-performance online transactional processing (OLTP), Aerospike’s engine has been updated with key features that are ideal for ensuring consistency without sacrificing speed. In addition to providing distributed ACID transactions, version 8 guarantees strict serializability of those transactions.

There are also intuitive transaction APIs that allow for multiple operations within a transaction while simplifying the developer experience.

Supporting Consistency

According to Aerospike CTO Srini Srinivasan, the objective of the release is to “move, collectively, the field forward for having higher-performance databases which also support consistency. And, we try to minimize that compromise of performance and availability while you’re adding strong consistency.”

Aerospike’s ACID properties ensure transactions don’t interfere with each other while producing well-understood results. This point is critical to organizations in regulated spaces like finance, which process what Srinivasan estimated to be up to hundreds of millions of transactions each second, each of which may contain multiple records.

Such organizations are “using us for high performance, but they need to denormalize the data and put it in a single record,” Srinivasan said. “And, if they have a necessity to link multiple records together, while still keeping them separate for regulatory reasons, that requires you to implement proper transactions, which is what Aerospike 8 does.”

Most importantly, the updated engine shifts the onus of maintaining consistency from the application level to the database level, liberating developers from such vital concerns.

Consistency and Performance

Prior to unveiling Aerospike Database 8, Aerospike provided transactional consistency for single-record operations. The distributed ACID characteristics of the new version supply consistency for more sophisticated transactions. “When you add the multirecord ACID distributed transaction support, you can change multiple records within the same transaction,” Srinivasan explained. Moreover, developers can realize the atomicity, consistency, isolation and durability (ACID) benefits for respective transactions across distributed systems spanning clouds, data centers and geographic locations.

Atomicity ensures a transaction either happens completely or not at all. Isolation means other transactions don’t access the records a transaction is currently accessing. Durability means the system won’t lose the data. Most importantly, these benefits are provided for high-performance applications. Aerospike’s “algorithms to provide consistency are crafted to provide higher availability than many other algorithms,” Srinivasan said. “That’s actually unique.”

Strict Serializability

The strict serializability of Aerospike Database 8’s distributed ACID transactions is also a key feature for developers. This property, which Srinivasan said guarantees that transactions are executed in the database in the order in which they occur, means addressing these ordering issues isn’t part of the app-building process. If an organization is transferring funds from one bank account to another and then withdrawing money from the latter in a series of operations, with strict serializability, “If a transaction finishes before another one starts, that is exactly how the database will execute it,” Srinivasan said.

Strict serializability means each new transaction accessing the database is updated with changes to the database made by previous transactions. Additionally, Aerospike’s strict serializability for multirecord transactions doesn’t compromise the performance of the single-record transaction support the database previously had. In fact, it achieves the former without “slowing down the single records,” Srinivasan commented.

Exonerating Applications and Developers

Aerospike Database 8’s new features transfer the burden of ensuring consistency from the applications relying on the database to the database itself. This development is meaningful for two reasons. Firstly, it results in more dependable applications, reliable uptime and better performance. According to Srinivasan, many algorithms designed to provide consistency in Aerospike can be implemented at the application level. “What that would mean is the applications would have to keep track of the state of every transaction they’re executing outside the database,” Srinivasan revealed. “And then, if the application server dies, then you lose state. So, it’s very, very hard to avoid data loss.”

Secondly, it’s difficult to identify bugs in distributed systems, which could create problems with the order in which transactions are executed. In addition to furnishing the aforementioned guarantees for consistency and the proper order of transactions, Aerospike supplies other tools to maintain consistency at the database level.

According to Srinivasan, resources such as Jepsen’s testing capabilities enable “a third-party application developer to check, ‘Hey, this database, does it work? Is it a proof for the algorithm?’ It makes it easy for application programmers. They don’t have to do all the hard work. They just write the apps and they can depend on these guarantees, and they can get verification that these are indeed being met.”

Transaction API Savviness

Aerospike Database 8 also contains a transaction API that’s useful for enabling complex transactions for OLTP systems. With the API, once a transaction begins, it’s possible to do a number of operations in it before the transaction end phase is reached. “At that point, you’re not guaranteed that the transaction will commit, because until that time somebody else might have interfered,” Srinivasan said. “But, that’s all done at the end-transaction phase. You basically put an envelope around all kinds of operations you’re doing on the database. That’s the API.”
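The “envelope” Srinivasan describes can be illustrated with a short sketch. The client below is an in-memory stand-in written for this article; none of the class or method names are the documented Aerospike 8 API, they only show the begin, operate, commit shape of a multirecord transaction such as a funds transfer.

```python
class FakeTxnClient:
    """In-memory stand-in for a transactional client; not a real Aerospike API."""
    def __init__(self, records):
        self.records = records              # committed state
    def begin_transaction(self):
        return dict(self.records)           # the "envelope": a private working copy
    def get(self, txn, key):
        return txn[key]
    def put(self, txn, key, value):
        txn[key] = value
    def commit(self, txn):
        # A real engine may still abort here if another transaction interfered.
        self.records.update(txn)

def transfer(client, src, dst, amount):
    txn = client.begin_transaction()        # open the envelope
    balance = client.get(txn, src)
    if balance < amount:
        raise ValueError("insufficient funds")
    client.put(txn, src, balance - amount)  # multiple operations inside one envelope
    client.put(txn, dst, client.get(txn, dst) + amount)
    client.commit(txn)                      # end-transaction phase

accounts = FakeTxnClient({"alice": 100, "bob": 20})
transfer(accounts, "alice", "bob", 30)
print(accounts.records)                     # {'alice': 70, 'bob': 50}
```

In a real engine, the commit at the end-transaction phase may still fail, which is exactly the guarantee that keeps multirecord state consistent.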

Aerospike Database 8 also supports Spring to improve the developer experience of using this framework with the database. According to Srinivasan, “Application developers can just program in Spring, and then, underneath the covers, we provide a library which translates the Spring API’s application call into underlying API calls at the database level. The Spring developer doesn’t need to know the APIs of the Aerospike database.”

Moving the Field Forward

Many NoSQL databases started out prioritizing availability over consistency before gradually adding properties for the latter. Aerospike’s distinction is that it is a distributed, high-performance multimodal database (with support for vectors, key-value, graph formats and document formats) that enables consistency for sophisticated, multirecord transactions.

With its consistency guarantees, it allows developers to concentrate on building the best logic for their applications without compromising their productivity, or progress, by worrying about concerns that are now handled at the database level.

