Month: December 2024
Amazon DynamoDB, a serverless NoSQL database, has been a go-to solution for over one million customers to build low-latency and high-scale applications. As data grows, organizations are constantly seeking ways to extract valuable insights from operational data, which is often stored in DynamoDB. However, to make the most of this data in Amazon DynamoDB for analytics and machine learning (ML) use cases, customers often build custom data pipelines—a time-consuming infrastructure task that adds little unique value to their core business.
Starting today, you can use Amazon DynamoDB zero-ETL integration with Amazon SageMaker Lakehouse to run analytics and ML workloads in just a few clicks without consuming your DynamoDB table capacity. Amazon SageMaker Lakehouse unifies all your data across Amazon S3 data lakes and Amazon Redshift data warehouses, helping you build powerful analytics and AI/ML applications on a single copy of data.
Zero-ETL is a set of integrations that eliminates or minimizes the need to build ETL data pipelines. This zero-ETL integration reduces the complexity of engineering efforts required to build and maintain data pipelines, benefiting users running analytics and ML workloads on operational data in Amazon DynamoDB without impacting production workflows.
Let’s get started
For the following demo, I need to set up zero-ETL integration for my data in Amazon DynamoDB with an Amazon Simple Storage Service (Amazon S3) data lake managed by Amazon SageMaker Lakehouse. Before setting up the zero-ETL integration, there are prerequisites to complete. To learn more about the setup, refer to the Amazon DynamoDB documentation page.
With all the prerequisites completed, I can get started with this integration. I navigate to the AWS Glue console and select Zero-ETL integrations under Data Integration and ETL. Then, I choose Create zero-ETL integration.
Here, I have options to select my data source. I choose Amazon DynamoDB and choose Next.
Next, I need to configure the source and target details. In the Source details section, I select my Amazon DynamoDB table. In the Target details section, I specify the S3 bucket that I’ve set up in the AWS Glue Data Catalog.
To set up this integration, I need an IAM role that grants AWS Glue the necessary permissions. For guidance on configuring IAM permissions, visit the Amazon DynamoDB documentation page. Also, if I haven’t configured a resource policy for my AWS Glue Data Catalog, I can select Fix it for me to automatically add the required resource policies.
Here, I have options to configure the output. Under Data partitioning, I can either use DynamoDB table keys for partitioning or specify custom partition keys. After completing the configuration, I choose Next.
Because I selected the Fix it for me checkbox, I need to review the required changes and choose Continue before I can proceed to the next step.
On the next page, I have the flexibility to configure data encryption. I can use AWS Key Management Service (AWS KMS) or a custom encryption key. Then, I assign a name to the integration and choose Next.
In the last step, I review the configurations. When I’m happy with them, I choose Next to create the zero-ETL integration.
After the initial data ingestion completes, my zero-ETL integration will be ready for use. The completion time varies depending on the size of my source DynamoDB table.
If I navigate to Tables under Data Catalog in the left navigation panel, I can observe more details, including Schema. Under the hood, this zero-ETL integration uses Apache Iceberg to transform the data format and structure of my DynamoDB data as it is replicated into Amazon S3.
Lastly, I can confirm that all my data is available in my S3 bucket.
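With the integration active, any engine that reads from the AWS Glue Data Catalog can query the replicated table. As a minimal sketch (the database name, table name, and results bucket below are hypothetical placeholders, not values from this walkthrough), here is how the data could be queried with Amazon Athena through boto3:

```python
import time
import boto3

athena = boto3.client("athena")

# Start a query against the Iceberg table that the zero-ETL
# integration maintains in the Glue Data Catalog.
response = athena.start_query_execution(
    QueryString="SELECT product_id, stock_level FROM products LIMIT 10",
    QueryExecutionContext={"Database": "my_lakehouse_db"},  # hypothetical
    ResultConfiguration={"OutputLocation": "s3://my-athena-results/"},  # hypothetical
)
query_id = response["QueryExecutionId"]

# Poll until the query finishes, then print the result rows.
while True:
    status = athena.get_query_execution(QueryExecutionId=query_id)
    state = status["QueryExecution"]["Status"]["State"]
    if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
        break
    time.sleep(1)

if state == "SUCCEEDED":
    results = athena.get_query_results(QueryExecutionId=query_id)
    for row in results["ResultSet"]["Rows"]:
        print([col.get("VarCharValue") for col in row["Data"]])
```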
This zero-ETL integration significantly reduces the complexity and operational burden of data movement, and I can therefore focus on extracting insights rather than managing pipelines.
Available now
This new zero-ETL capability is available in the following AWS Regions: US East (N. Virginia, Ohio), US West (Oregon), Asia Pacific (Hong Kong, Singapore, Sydney, Tokyo), Europe (Frankfurt, Ireland, Stockholm).
Explore how to streamline your data analytics workflows using Amazon DynamoDB zero-ETL integration with Amazon SageMaker Lakehouse. Learn more about how to get started on the Amazon DynamoDB documentation page.
Happy building!
— Donnie
MongoDB, the developer database company, has added a new group of organisations to its AI development and services effort, the MongoDB AI Applications Program (MAAP) ecosystem.
MAAP was launched this summer, with founding members Accenture, Anthropic, Anyscale, Arcee AI, AWS, Cohere, Credal, Fireworks AI, Google Cloud, gravity9, LangChain, LlamaIndex, Microsoft Azure, Nomic, PeerIslands, Pureinsights, and Together AI. It offers customers an array of resources to put AI applications into production: reference architectures and an end-to-end technology stack that includes integrations with leading technology providers, professional services, and a unified support system to help customers quickly build and deploy AI applications.
Capgemini, Confluent, IBM, QuantumBlack, AI by McKinsey, and Unstructured have now all joined MAAP, giving enterprises additional integration and solution options. The MAAP Center of Excellence Team, a cross-functional group of AI experts at MongoDB, has collaborated with partners and customers across industries to overcome an array of technical challenges, empowering organisations to build and deploy AI applications.
MongoDB is also now collaborating with Meta on the Llama large language model platform to support developers in their efforts to build more efficiently and to best serve customers. Customers are leveraging Llama and MongoDB to build “innovative and AI-enriched applications”, said the partners.
“At the beginning of 2024, many organisations saw the immense potential of generative AI, but were struggling to take advantage of this new, rapidly evolving technology. And 2025 is sure to bring more change, and further innovation,” said Greg Maxson, senior director of AI GTM and strategic partnerships at MongoDB. “The aim of MAAP, and collaborations with industry leaders like Meta, is to empower customers to use their data to build custom AI applications in a scalable, cost-effective way.”
“Business leaders are increasingly recognising generative AI’s value as an accelerator for driving innovation and revenue growth. But the real opportunity lies in moving from ambition to action at scale. We are pleased to continue working with MongoDB to help deliver tangible value to clients and drive competitive advantage by leveraging a trustworthy data foundation, thereby enabling gen AI at scale,” said Niraj Parihar, CEO of insights and data global business line and member of the group executive committee at Capgemini. “MAAP helps clients build gen AI strategy, identify key use cases, and bring solutions to life, and we look forward to being a key part of this for many organisations.”
“We are pleased to see how many enterprises are leveraging our open source AI models to build better solutions for their customers and solve the problems their teams are facing every day,” added Ragavan Srinivasan, VP of product at Meta. “Leveraging our family of Meta models and the end-to-end technology stack offered by the MongoDB AI Applications Program demonstrates the incredible power of open source to drive innovation and collaboration across the industry.”
In another useful integration, for both database admins and application managers, Datadog, the monitoring and security platform for cloud applications, has announced its Database Monitoring product now observes MongoDB databases. Database Monitoring now supports the five most popular database types: MongoDB, Postgres, MySQL, SQL Server and Oracle.
Traditional monitoring tools typically only allow organisations to monitor either their databases or their applications. This can lead to slow and costly troubleshooting that results in frustration from database and application teams, extended downtime and a degraded customer experience.
Datadog Database Monitoring enables application developers and database administrators to troubleshoot and optimise inefficient queries across database environments. With it, teams can easily understand database load, pinpoint long-running and blocking queries, drill into precise execution details and optimise query performance to help prevent incidents and spiralling database costs.
“Replication failures or misconfigurations can result in significant downtime and data inconsistencies for companies, which may impact their application performance and reliability. That’s why maintaining high availability across clusters with multiple nodes and replicas is critical,” said Omri Sass, director of product management at Datadog. “With support for the top five database types in the industry, Database Monitoring gives teams complete visibility into their databases, queries and clusters so that they can maintain performant databases and tie them to the health of their applications and success of their businesses.”
Presentation: Production Comes First – An Outside-In Approach to Building Microservices
MMS • Martin Thwaites
Transcript
Thwaites: My name is Martin Thwaites. I’m first and foremost an observability evangelist. My approach to building systems is about how do we understand them, how do we understand them in production. I also work for a company called Honeycomb. A couple years before I worked for Honeycomb, I was contracted to a bank. I was brought in to start looking at a brand-new system for the bank: core banking systems, ledgers, all of that stuff, writing them from scratch, which is a task. We started to understand when we started building it, that the thing that was really important to them was about correctness at a system and service level. They didn’t really care about code quality. They didn’t care about database design. All they really cared about was the system was correct. What that came to was us trying to understand that actually the problem that we were trying to solve was that production returned the right data, and that was a little bit liberating.
Testing (Feedback Loops)
Some people may recognize this diagram. This was a 2020 article that Monzo Bank put out. That wasn’t the bank I was working with. This is a view of their microservices, all 1500 of them. When they put it out, it caused a little controversy. I think there was a lot of developers out in the world that looked at this with fear of, I struggle to keep the class that I’m developing in my head, never mind all 1500 microservices. Because I’ve been in this industry for a while, and that’s the system I started working on. It had a box, and it had a cylinder. There was a person, they hit my website, and that website loaded things from a database and put stuff back in a database, which was incredibly easy to reason about.
That was really hard for me to get my head around when we start thinking about large microservices systems, where we’ve got lots of independent systems that do independent little things. It’s hard to equate them to a time when I could write a test against my production system, my blue box in this scenario, where it would test a couple of classes, and I would feel a lot of confidence that my system is going to work, because I could run the entire thing locally. Can you imagine trying to run that on your machine? I’m pretty sure that an M3 Max Pro thing won’t do it either. You just can’t.
The issue is that now our systems look a little bit more like this. We have some services. Users may hit multiple services. You may have an API gateway in there. This is not meant to be a system that I would design. This is meant to be a little bit of a pattern that we’re going to talk about. We’ve got independent systems. Some of them are publicly accessible. Some of them aren’t. We’ve got some kind of message bus that allows us to be able to send things between different systems asynchronously. We’ve got synchronous communication, asynchronous communication, all those things in there. A little bit of a poll.
How many people identify as an application developer, engineer, somebody who writes code that their customers would use and interact with, as opposed to maybe a DevOps engineer or an SRE? This is a talk about testing. It’s really a talk about feedback loops, because testing is a feedback loop of how we understand our systems. If we were to take a developer, the fastest feedback loop that we have is a linter. It will tell us whether our code is formatted correctly, whether we’re using the right conventions. It will happen really fast. It happens as you type. You don’t really get much faster than that. You’re getting really rapid feedback to know whether you’re developing something that is correct.
The next time we have feedback is the compiler, if we’re running in a compiled language. We will compile all of those different classes together, and it will tell us, no, that method doesn’t exist on that class. If we’re using languages that do support compilation. Again, that’s really fast, depending on what language you’re using and depending on how bad the code is. It is a fast feedback loop. We’re up to seconds at this point. The next feedback loop that we have is developer tests, the test that you write as a developer against your application. I’m specifically not mentioning two types of testing that would fall into that bracket, because they cause controversy if you call one thing another thing, and another thing another thing. We’ll come on to what those are.
Then, one, we’ve got our developer tests. We’re talking now maybe multiple seconds, maybe tens of seconds, depending on how big your tests are, how many tests you have that are running, how fast those tests run. We’re still local machine here. We’re still able to get some really rapid feedback on whether our system is doing supposedly what we’ve been asking it to do. Because, let’s face it, computers only do what we ask them to do. There aren’t bugs. There are just things that you implemented incorrectly, because computers don’t have a mind of their own yet.
The next thing we have is, I’ll call them end-to-end tests. There’s lots of different words that you might use for these, but where we test a user flow across multiple different services or systems to ensure that things are running properly. This is where we add a lot of time. These are tests that can take minutes and hours to run, but we’re still getting feedback in a fast cycle. From there, we have production telemetry. I told you I’d mention observability.
We have production telemetry, which is telling us how that system is being used in production, whether there are errors, whether things are running slow. Again, this is a longer feedback loop. Each of these goes longer. Then, finally, we have customer complaints, because that is a feedback loop really. Customers are very good at telling you when things don’t go right, they just don’t do it fast enough for me. We’ve got all of these feedback loops, but they all take longer. Each different one of these is going to take longer. It’s not something you can optimize. It is something that’s just going to take longer.
Now we get into something that’s a little bit more controversial when we talk about developer tests. A lot of people would see developer tests as methods, testing each individual method that we’ve got in our system. We might call these unit tests, coded tests. There’s lots of different words that you might use for them. Beyond that, we go a little bit further out, and we have classes, an amalgamation of multiple different methods. There are some people who call these integration tests. I do not agree, because integration test is not a thing. It’s a category of a thing. If we go even a bit further out than that, we’ve then got things like controllers and handlers, if we’re using messaging contracts or CQRS, that will bring together multiple classes and test them together as a bit of functionality. Then we can go even a further step beyond there.
Then we’ve got API endpoints, messages, events, the external interactions of our system. Essentially, the connection points into our system. Because, ultimately, on the outside there, that’s the only thing that matters. We can write tests at the method level. I would imagine that all those people who put their hand up before have likely got a class inside of their code base, or a method inside of their code space that has way more unit tests than it needs. They’ve spent hours building those tests. If you go and talk to them about it, it is the pinnacle of developer excellence, this class. With all of these unit tests, it’s never going to fail. I come from a world where we’ve got things like dependency injection. What will happen is, you’ll do that class, you’ll make it amazing.
Then you’ll try and run the application and realize, I didn’t add that to dependency injection, so nothing works now. I normally ask the question of how many people have deployed an application and realized they hadn’t added it to dependency injection. Now I just assume that everybody has, because everybody puts their hand up, because we’ve all done it. Because the reality is, when we’re writing tests at that level, we’re not really testing what’s useful. It’s my favorite meme that exists. I’m sure both of those drawers worked, but as soon as we try and pull them together, they don’t. It doesn’t matter how many tests you write against your methods and your classes and even your handlers and your controllers, because the only thing that matters is those connection points at the top. No CEO is going to keep you in your job when you say, all of my unit tests passed. I didn’t mean us to lose £4 million.
Getting back to our system, if we look at this example system. We’ve got three services that a customer can hit. We’ve got one service which is essentially fully asynchronous. It has an internal API call. The red lines on here are API calls, the blue lines are messages, not wholly important for what we’re doing. The thing that’s important on this diagram is these things. These are our connection points. I’ll say it again, these are the only things that matter. Inside those blue boxes, nobody cares. Nobody cares if you are writing 400 classes, one class. Nobody cares that you’ve got an interface for every single class that you’ve created. Nobody cares about your factories of factories that deliver factories that create factories. Because, ultimately, unless those connection points that we’re looking at here work independently and together, nobody will care.
Small little anecdote from the bank, we based everything based on requirements. If we got a requirement for something, we’d write the tests and we’d implement them. We get requirements like this, for instance, the product services’ list products endpoint should not return products that have no stock. Very valid requirements. Because it’s a valid requirement, it’s something that you could write a test for. In this diagram, that’s the only two things that matter. On that service, those two connection points are the only two things that matter. You could spend hours building classes internally, or you could spend 20 minutes building one class that does it. You could spend ages building unit tests around that class or those classes, but unless those two endpoints work, you haven’t done the work.
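As a rough illustration of a test pinned directly to that requirement, here is a Python sketch using FastAPI’s in-process TestClient as a stand-in for the in-memory API hosting described later in the talk; the endpoint shape and seed data are hypothetical:

```python
from fastapi import FastAPI
from fastapi.testclient import TestClient

# Hypothetical service under test; in a real suite this would be the
# actual application object, hosted fully in memory with no network hop.
app = FastAPI()
PRODUCTS = [
    {"id": "p1", "name": "Widget", "stock": 3},
    {"id": "p2", "name": "Gadget", "stock": 0},
]

@app.get("/products")
def list_products():
    # Requirement: the list products endpoint must not return
    # products that have no stock.
    return [p for p in PRODUCTS if p["stock"] > 0]

def test_list_products_excludes_products_with_no_stock():
    client = TestClient(app)
    response = client.get("/products")
    assert response.status_code == 200
    returned_ids = [p["id"] for p in response.json()]
    assert "p2" not in returned_ids  # zero-stock product must not appear
```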
Then there’s another system that is something that we’re going to receive, because the warehouse service should emit messages that contain the new stock level for a product, whenever an order is placed. Again, we’ve got some contracts. These are two independent services. If we have to build them, deploy them together, they’re not microservices. They are independent services. They emit messages. They receive messages. They take API calls, they emit messages. They’re to be treated in isolation, because those are the contracts that we’ve agreed to. Those contracts can’t change, because if they change, the consumers need to change.
Test Driven Development (TDD)
The thing is, ultimately, this is actually a talk about TDD. It’s not a talk about testing in general. It’s actually talking about the TDD workflow, because TDD is not about unit tests. It’s not about testing methods. It’s not about testing classes. It’s not about whether you use JUnit or xUnit, or whatever it is that you’re using for it. That isn’t what makes it a unit test. That isn’t what makes it TDD. It might make it a unit test depending on how you define a unit, which between the U.S. and America, units are different, apparently. Ultimately, TDD is about a workflow. We write a test. We write a test that fails. We write an implementation for that test that makes it pass. Then we refactor, and then we just continue that cycle. If we go back to what I talked about with requirements, ultimately, we’re getting requirements from the business or even from internal architects, or we’ve designed them ourselves to say, this is the contract of our system. What we’ve done is we’ve generated a requirement, which means we can write a test for that requirement.
At the bank, what was really interesting was we got really pedantic about it, which is what I love about it, because I’m a really pedantic person, really. We had this situation for around six months, I think it was. We went live, obviously. We had a situation for about six months where you could spend more than your balance, which apparently is a bad thing in banks. We had the BA come to us and say, “There’s a bug in the platform”. He said, “You can spend more than your balance”. I’m like, “I’m so sorry. Can you point me to the requirement so I can put it on the bug report?” He was like, “There wasn’t a requirement for it. We just assumed that you’d build that”.
The reality was, because there was no requirement for it, we hadn’t built a test. We hadn’t built any tests around that. We’d built something because they said we need to get the balance. We’d built something because they said we need to be able to post a transaction. We hadn’t built anything that said, throw an exception, throw a 500 error, throw a 409 error, throw some kind of error when they spend more than their balance. What that meant was they had to tell us exactly what error message should be displayed under what conditions, and we had to validate that that error message was expressed in those conditions. I’d like to say eventually they came on board, they didn’t. It was a constant battle.
Ultimately, they got this idea that they have to define the requirements, not the implementation. We got them coming to us a few times, saying, “So we need you to add this to the database”. We don’t have a database. “But I need it in the database”. We use an event store, it’s not a database. “We need to update the balance column”. No, stop talking about implementation details, tell me what you need it to do, and we will build it. Which is very important when we talk about legacy systems, because legacy systems, and the way that legacy systems were developed, was by that kind of low-level detail. This is what it means to work in a TDD way from the outside: people define requirements, we write tests for those requirements, and then we write the implementation that makes those requirements work.
What’s really cool about that is we don’t develop code that isn’t used. We don’t develop a test or 500 tests that test that a class is bulletproof, where none of those conditions can ever be met. We end up with a leaner code base, which, a leaner code base is easiest to support. It also means you can get 100% code coverage. Because the reality is, if that code isn’t hit by any of your outside-in tests, your API level tests, then one of two things has happened, you’ve either missed a requirement or that code can never be hit. What I’m saying here is not something that I would like anybody to go away and cargo call it and just say, “Martin said that I should just do this, therefore, if I lose my job, I can sue Martin”. No, it’s not the way this thing works.
These are just different gears. We call these gears. We’ve got low level gears, unit tests, methods, classes that allow us to maneuver, but you don’t get very far fast, because you’ve got to do more work. On the outside when we’re testing our APIs, we can run really fast. We can test a lot of detail really fast. Those are your higher gears. It’s about switching gears when you need to. Don’t feel like you need to write tests that aren’t important.
The Warehouse System
If we take, for instance, the warehouse system, essentially what we’re trying to do here is we’re going to write tests that work at this level. We’re going to write tests where what we do is we send in an order placed message. Before that, we’re going to send in some stock received messages. Because there’s a requirement I’ve not told you about that was part of the system, which is, when we receive a stock received message, that’s when we increase the stock levels. Maybe we don’t do an output at that point. Each one of these would be a requirement that would require us to write another test.
If somebody came to me with this requirement of saying, put in the order placed message and send out the new stock, I’d say it’ll fail at the moment because there’s no stock. What’s the requirement? When I get new stock, I need to increase the stock. We start to build up this idea of requirements. We get traceability by somebody telling us what our system is supposed to do and why it’s supposed to do it, which means we build up documentation. I know everybody loves writing their own documentation against your classes and all of that stuff, writing those big Confluence documents. Everybody loves doing that? That’s pretty much my experience.
Ultimately, when we’re building these things from requirements, the requirements become our documentation. We know who requested it, why they requested it, ultimately, what they were asking it to do, not what it’s doing. When we place these two things in, we then check to make sure that we’ve received a stock received message. That’s all that test should do. A lot of people would call this integration testing. I don’t agree. I call these developer tests. The reason I call them developer tests is because we write these things in memory. We write these things against an API that we run locally against our service.
The developers write these, not some external company, not some person who you’ve just hired to go and sit at a desk, not some intern. The person who’s writing this code, the implementation, has first written these tests that will fail. Then once they’ve failed, they’ll write the implementation to make them pass. It’s that simple. If there’s a requirement that you think is something that should be built, you go back to the business and say, “I think this should be a requirement”. “Yes, I agree. What’s the requirement? Write the requirement. Now go and write the test”. You could call this RTDD, Requirements Test Driven Development, maybe.
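A hedged sketch of the warehouse flow above as one of these developer tests; the message shapes and handlers are hypothetical stand-ins for whatever the real service exposes in memory:

```python
from collections import defaultdict

# Hypothetical in-memory stand-ins; a real suite would drive the actual
# service through its message entry points, still without a network hop.
stock_levels = defaultdict(int)
emitted = []

def handle_stock_received(msg):
    # Requirement: a stock received message increases the stock level.
    stock_levels[msg["product_id"]] += msg["quantity"]

def handle_order_placed(msg):
    # Requirement: an order placed message emits the new stock level.
    stock_levels[msg["product_id"]] -= msg["quantity"]
    emitted.append({
        "type": "stock_changed",
        "product_id": msg["product_id"],
        "new_level": stock_levels[msg["product_id"]],
    })

def test_order_placed_emits_new_stock_level():
    stock_levels.clear()
    emitted.clear()
    handle_stock_received({"product_id": "p1", "quantity": 5})
    handle_order_placed({"product_id": "p1", "quantity": 2})
    assert emitted == [
        {"type": "stock_changed", "product_id": "p1", "new_level": 3}
    ]
```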
Ultimately, what we’re doing here is we’re testing to make sure that the things that happen in production produce the right outcomes that should happen in production. One of the things I see as an antipattern when people start doing this is they do this. They’ll go and inject a load of products into the SQL Server, into the database, into whatever it is that you’re using as your datastore, as part of their test setup, as part of preceded data. I’m going to take a wild guess that that’s not the way it happens in production, that when you get stock, somebody doesn’t craft some SQL queries and increase the stock.
Ultimately, that’s not what you’re expecting to happen. What you’re expecting to happen is you might expect something to happen where the database already has some data. That’s not how it’s going to happen in production. You get into this step of your tests being tightly coupled to your database schema when they shouldn’t be. Because if you have to change your test when you write some new code, you can no longer trust that test. You don’t know whether that test is right. It was right, but now you’ve changed something in that test. Is it still right? Yes, because I don’t write bad code. Unfortunately, that’s not something that people believe. Ultimately, what we’re trying to do here is mimic what a production system does in every single test that we do.
We had, when we built this, around 8000 of these tests per service, and they ran in under 10 seconds. For context, we were running .NET when we did this. .NET has something called a WebApplicationFactory, which runs up our entire API in memory, so there are no network calls, because as soon as you add a network call, you add at least a millisecond. You need to think about how you test this. It’s not just, let’s run some test containers and run some Newman tests against these test containers, that’ll do it. That isn’t how we do it. You have to think about it. You have to spend the time to build the frameworks that allow you to test this way.
Ultimately, when you have those 8000 tests, you have a ton of confidence. It is a journey. We had a team that had built this and then decided that their entire internal implementation was wrong when a new requirement came in. The database design that they’d come up with just didn’t work for this new thing. They rewrote everything, the entire blue box, rewrote all of it. Contracts were the same on the outside, because we’re still going to get an order placed message. We’re still going to get stock received messages, and we still need to output the stock changed messages. They did that, and then all the tests passed. They spent two days procrastinating because they didn’t feel confident, because they’d rewritten the entire internal implementation, but the tests still passed.
I think that goes to show that when we’re writing so much code, we don’t have confidence that we can deploy things, because we’re designing our tests, our confidence tests, around our local environment or at too low a level. We’re not going to the outside. We’re not testing what production wants. We’re not testing what the business wants. We’re testing the code that we wrote. They eventually deployed it. It all worked. It was fine. I think that really goes to show that you should really think about these tests. Once you get that confidence, you can run really fast with a lot of confidence. This is how you get to the point where people are deploying per commit. Because if your pipeline can run all of these tests, and you can have 100% confidence that every requirement that you had to achieve before is achieved now, just deploy it. Your new stuff might not be right, but put that behind a feature flag. As long as all the old stuff still works, just deploy it, test in production.
Observability
Ultimately, though, there are things that we can’t test from the outside, because from the outside, some of the things that we do may look the same. We may say, do a thing, and the outcome may be the same, but the internals of what we expected might be different. We can’t do that from an outside-in testing perspective, but there is another way, because this is actually a talk about observability. What do we mean by observability? This is a quote from our founder. Observability actually comes from something called control theory from the 1960s, it’s not new. It’s not something that the logging and metrics vendors came up with about seven years ago. It’s something that’s ancient, because there is nothing new in IT, apparently. Ultimately, observability is about understanding and debugging unknown-unknowns.
The ability to understand the inner system state just from asking questions from the outside. That’s the software definition based on Cummings paper on controllability. I want to focus on one part of that, which is the ability to understand the inner system state, any inner system state. Because I said there’s things that we can’t test from the outside, but are important for us to know that they went through the right approaches. When we were writing things for production, we really need to know about how our software is going to work in a production environment. That’s really important to us. That’s where we do things like tracing, and we get things like this, which is a basic trace waterfall.
Ultimately, the business only cares about that top line, the outside. They care about how long it takes. They care about it returned the correct data. You as an engineer, someone who’s supporting that system, may care about either the whole waterfall or just some individual sections of that waterfall. You may just care about your service, your checkout service, maybe that’s the one that you’re looking after, maybe the cart service, maybe you’re looking after two of those services. This is what allows us to see inner system state, and this is what we see when we’re in production.
What if we could use that locally? We call this something called observability driven development, which is how we look at the outside of a big service, the big system. How do we use that information to help drive our development flows, to help drive more requirements, and therefore more tests and more implementations and more refactoring? I’ve got an example. Don’t try and work out the language. I deliberately took about six different languages and munged together the syntax so that nobody thinks that I’m favoring one particular language. There may be a test afterwards to spot all the different design patterns. Let’s say we’ve got a GetPrice for a product, and we use pricing strategies based on the product ID. Maybe the product ID has a different pricing strategy on there.
Those pricing strategies are something that we use to calculate the new price of a product. If two of those pricing strategies somehow converge, maybe one of them is a 10% markup, and one of them is a £1 markup. If my product is £10, they’re both going to come out with the same outcome. From the outside, I don’t know whether that is the right pricing strategy that has been used. Ultimately, it’s not just something I want to know in my tests. It’s something I want to know about my system when it’s running as well, because if it’s important enough for you to know whether it’s correct in your tests, and if we’re writing tests from the outside to understand our system, it’s important enough for you to give to either the people supporting your system or yourself, if that’s you supporting that system.
Ultimately, I’d like to think that everybody treats the people supporting their system as if they were themselves anyway. We’re not in a world anymore where we throw it over the wall and somebody else can sort that. Like, “I’ve paid my dues. I’m going home, I’m going to the pub”. We’re not in that world anymore. We need to be kind to ourselves about how we understand how the systems work in production.
We can use those same concepts in our local development because, let’s say we were writing a test to say let’s make sure our GetProduct endpoint, when it’s got a valid product, uses the right strategy. How would we do that? How would we call GetProduct on our ecomService API from the outside? Imagine this is a wrapper around our API that we’ve built inside of our test. How do we test to make sure it’s using the right strategy? This is where we use either something called tracing, or we use logs, or we use some kind of telemetry data that should be emitted in production to test locally. I use tracing. I think it’s superior, but that doesn’t mean you have to. Logs are just as useful in this circumstance. What we can do is we can say, let’s get the current span that’s running, and then let’s set our product strategy ID on our span.
What that means is, when we go and then write the test, now we can go and say, go and work out what spans were emitted for this particular test. Then make sure that the strategy ID has come out, and then make sure it’s using the right strategy ID. It sounds simple, and that’s because it is. It does however take work. This is not something where you’re just going to be able to take it out of the box, open up a C# file, or a Java file, and just write something in. What we found when we were writing things at the bank, with the new people that we brought on, is that there was a big knee-jerk reaction of people going, no, that’s not the way we build things in finance. There’s a reason for that, because we’ve built things better when we’re not in finance.
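A minimal sketch of that pattern in Python with OpenTelemetry; the service, strategy names, and attribute key are hypothetical, but the in-memory span exporter is a standard way to assert on telemetry in tests:

```python
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import SimpleSpanProcessor
from opentelemetry.sdk.trace.export.in_memory_span_exporter import InMemorySpanExporter

# Test-only setup: export spans into memory instead of a real backend.
exporter = InMemorySpanExporter()
provider = TracerProvider()
provider.add_span_processor(SimpleSpanProcessor(exporter))
trace.set_tracer_provider(provider)
tracer = trace.get_tracer("ecom-service")

def get_price(product_id: str) -> float:
    # Hypothetical implementation: record which pricing strategy was
    # chosen on the current span, exactly as production telemetry would.
    with tracer.start_as_current_span("GetPrice"):
        strategy_id = "percentage-markup" if product_id.startswith("A") else "flat-markup"
        trace.get_current_span().set_attribute("pricing.strategy_id", strategy_id)
        return 11.0  # both strategies happen to converge on the same price

def test_get_price_uses_percentage_markup_strategy():
    get_price("A123")
    spans = exporter.get_finished_spans()
    # The price alone cannot distinguish the strategies; the span can.
    assert spans[-1].attributes["pricing.strategy_id"] == "percentage-markup"
```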
Ultimately, there’s a lot of legacy that comes in with big bank finance systems, that kind of stuff. These are new patterns, but they’re different strings to your bow. They’re not something that can be used in isolation. You’re not going to get everything by doing these things, but you can do a lot of things.
When you do things like these, when you’re writing your tests and you’re running your tests, you can actually see what that tracing data is going to look like, and what the log data is going to look like, if in your local environment you push to a system that can display it. Microsoft have just released Aspire for the .NET people, which allows you to push stuff and see all of your telemetry data locally. There’s Jaeger, there’s OpenSearch, there’s lots of things that you can use, but you’re essentially now able to see that data. The other thing that we found when we started writing software like this: very few people actually run the software. Very few people actually run it locally.
They just run the tests, because the tests were actually doing what they would do themselves. They don’t have to craft a message, stick it on a service bus, stick it in SQS. I just put it in the test, and the test tells me it works. Imagine how much time you spend hitting play on your IDE, letting that thing run, going into your Swagger doc, putting in the information, and hitting play. Or even just pressing play on an HTTP doc and hitting the send request on those. If you could get around all of that, and you could test all of them all the time, how much more efficient would you be? Because what you’ve done is you’ve built that system so it works in production, not locally, not your local classes, not your local methods. You’ve built it so it works in production. This talk is actually about writing fewer tests, because you can write fewer tests on the outside that test more things than you can on the inside by writing unit tests, by writing those method level tests.
Key Takeaways
I want to leave you with a couple of thoughts. The first one is, don’t cargo cult this. Don’t take this away and say this is the thing that we’re going to do now, we’re not going to do anything else. Think about which gear you’re in. Think about whether you need to move down gears. Don’t just move down gears, move up them as well. Can I write a test at the controller level? Can I write a test at the API level? Think about what extra things you’re testing as you go to the outside. As an example, if you’re at an API level and you test things, you’re actually testing serialization and deserialization. Because, how many times have we seen somebody changing the casing of a JSON object? Think about what gear you’re in. Think about what you’re trying to test. Think about what your outcomes are that you’re looking for.
Think about whether this is the right gear for the test that you need to write. When you’re doing that, write the tests that matter. It does not matter that you have a test that sets a property on a class and make sure you can get that property back. I can see a lot of people going, I’ve got loads of those. They’re not that useful. Don’t make your applications brittle. The more tests we write against methods and classes, the more brittle our application framework becomes, because as soon as we try to make a change, we’ve got to change tests. Those tests cascade. Finally, think about whether you’re covering your requirements, the requirements that the business have put on you. Think about that, because that’s the most important thing. The requirements are what matter in production. If you’re not testing the requirements, you’re not building for production.
Questions and Answers
Participant 1: How do we know that these techniques will actually ensure that our applications are easier to debug, and that they’re primed for use in production?
Thwaites: I think it’s really about the observability side, because if you’ve been able to debug something locally, and you’ve been using the observability techniques to do that, you’ve essentially been debugging your production system anyway when you’ve been writing the software. Because while you’ve been debugging when a message comes in and when a message goes out, that’s what you’ve been debugging, and that’s all that’s happening in production. That’s the difference, because if you debug a unit test locally, you’re debugging a unit test, you’re not debugging an application, you’re not debugging a service. You’re not debugging a requirement or something that would happen in production.
Participant 2: If you have a use case or a requirement that also involves a different service doing its thing, how would you test that? If the answer is mocking that service, how would you handle changes to that service’s requirements?
Thwaites: If we’ve got multiple services that are interacting together to achieve something, how do we test that scenario?
Ultimately, you shouldn’t care about that other service. That’s that service’s job to maintain its contract. Each of those connection points that we’re talking about is a contract that you’ve delivered to somebody. You can’t change that contract, because that will break things. You might own both services, but we work at a service level. Your tests in your service should make sure that that contract never changes, or, more specifically, never unknowingly changes. You might want to change it, but as soon as your tests fail on those contracts, you know that your consumer is going to fail. If you’ve got to change your tests, you’re going to have to tell consumers that they need to change their implementation. You shouldn’t care about what other service is doing. That’s that service’s job. What you should know about is, I know the contract that that service is going to give me. Let’s take an API contract.
The example I gave before was the checkout service is going to check the stock live via an API call into the warehouse service. In that scenario, you know what the contract of the warehouse service is going to be for that call, or you should put a mock or a stub, more specifically, into that service that says, when I call for these specific details, I expect you to give me this specific response. Because that’s the pre-built contract that you’ve agreed with that service. If that service changes, that’s on that service, that service is now broken, but it’s that service’s job to do the same thing. If you want to do that, then you’ve got to do that with every single one of them. I can tell you that 1500 microservices locally is not the thing you want to do.
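A rough Python sketch of that kind of contract stub; the client interface, response shape, and checkout logic are all hypothetical, and the point is that the test encodes the agreed contract rather than the other service’s internals:

```python
import pytest

class StubWarehouseClient:
    """Honors the agreed contract of the warehouse service:
    get_stock(product_id) -> {"product_id": ..., "stock": ...}."""

    def __init__(self, stock_by_product):
        self.stock_by_product = stock_by_product

    def get_stock(self, product_id):
        return {"product_id": product_id,
                "stock": self.stock_by_product.get(product_id, 0)}

def checkout(warehouse, product_id, quantity):
    # The checkout service relies only on the contract, never on the
    # warehouse service's implementation.
    available = warehouse.get_stock(product_id)["stock"]
    if available < quantity:
        raise ValueError("insufficient stock")
    return {"status": "accepted"}

def test_checkout_rejects_orders_exceeding_stock():
    warehouse = StubWarehouseClient({"p1": 1})
    with pytest.raises(ValueError):
        checkout(warehouse, "p1", 2)
```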
You’ve got to take each of these services individually. You’ve got to make sure that everybody buys into this approach, that we are not going to change our contracts, or that we are going to support that contract. If you do that as an organization, not only do you get the confidence that internal consumers are not going to break. You can also use this for external consumers as well. Because we had a discussion in the unconference about multiple different external consumers that you don’t control, as soon as it’s public, you’re going to have to support it. API contracts are for life, not just a major release.
MMS • Anthony Alford
Physical Intelligence recently announced π0 (pi-zero), a general-purpose AI foundation model for robots. Pi-zero is based on a pre-trained vision-language model (VLM) and outperforms other baseline models in evaluations on five robot tasks.
Pi-zero is based on the PaliGemma VLM, which was then further trained on a custom dataset collected from 7 different robots performing 68 tasks as well as the Open X-Embodiment dataset. The resulting base model can accept natural language commands and perform tasks “at rudimentary proficiency.” The Physical Intelligence researchers compared pi-zero’s performance to two baseline models, OpenVLA and Octo, on five different tasks, including folding laundry and bussing a table; pi-zero achieved “large improvements” over the baselines. According to Physical Intelligence:
The frontiers of robot foundation model research include long-horizon reasoning and planning, autonomous self-improvement, robustness, and safety. We expect that the coming year will see major advances along all of these directions, but the initial results paint a promising picture for the future of robot foundation models: highly capable generalist policies that inherit semantic understanding from Internet-scale pretraining, incorporate data from many different tasks and robot platforms, and enable unprecedented dexterity and physical capability.
Pi-zero’s architecture is inspired by Transfusion, a model created by Meta and Waymo that operates on tokens representing both discrete and continuous data. In the case of pi-zero, the model has a distinct module that handles robot-specific action I/O, which the researchers call the “action expert.” The model’s input is a combination of vision images, the robot’s joint angles, and a language command; the output is a sequence of robot action tokens.
For some complex tasks, the human operator’s language command was first fed into a high-level VLM which decomposed it into a sequence of simpler tasks, as done by models like SayCan. The researchers found that this scheme improved performance on tasks such as setting a table. They also found similar improvement when the human operator gave the robot a sequence of simpler commands.
Physical Intelligence co-founder Karol Hausman answered several questions about the model on X. He confirmed that their demo video was not scripted or teleoperated. When asked why his team used folding laundry for evaluating their model, he said:
There are…many reasons why laundry folding is a good task:
– everyone understands if it’s done well
– it’s easy to reset (throw the clothes back in the basket)
– it can be arbitrarily long (multiple items in a row)
– it’s easy to generate diverse data (many clothing items)
Andrew Ng’s The Batch newsletter discussed pi-zero, saying:
One of the team members compared π0 to GPT-1 for robotics — an inkling of things to come. Although there are significant differences between text data (which is available in large quantities) and robot data (which is hard to get and varies per robot), it looks like a new era of large robotics foundation models is dawning.
Several other large players have been developing multimodal foundation models for robotics. Earlier this year, InfoQ covered NVIDIA’s GR00T model, which is trained on video, text, and real robot demonstrations. Last year, InfoQ covered Google’s PaLM-E, a combination of their PaLM and Vision Transformer (ViT) models designed for controlling robots, and Google DeepMind’s Robotics Transformer 2 (RT-2), a vision-language-action (VLA) AI model for controlling robots.
- As a community driven platform, STAN stands at the intersection of the 4 pillars of the gaming ecosystem: the gamers, gaming KOLs, game developers and advertisers
- The platform boasts a strong community of over 20 million users, 350k KOLs, and 250+ developers and advertisers, and helps them seamlessly connect with each other
- Additionally, to bolster the growth of the gaming industry, STAN is looking to launch an accelerator program for game developers to provide support and mentorship
STAN, India’s leading gaming community platform, has emerged as the #1 game project on the Aptos network. The platform has witnessed remarkable growth in recent months, onboarding an impressive 2.5 million users in the last six months alone. This milestone reflects the platform’s community-first approach and cutting-edge technology.
With a total user base of 20 million, STAN is not only growing rapidly but also consistently solving inefficiencies of the gaming ecosystem, mainly the disconnect between the 4 pillars: the gamers, gaming KOLs, game developers and advertisers. STAN stands at the intersection of these stakeholders, attracting a diverse and engaged audience and building thriving communities. These communities function as dynamic hubs, where users can engage, participate in tournaments and develop deeper connections with creators and other gamers.
Understanding the primary need of the stakeholders, STAN focuses on the RII (Relationship, Identity and Incentives) framework, where these stakeholders are brought under one roof and each can strengthen the rest. The platform provides the gamers with true data ownership, giving them an identity in the decentralised space and allowing them to earn tangible rewards for the time and effort spent on gaming. This not only fosters satisfaction but also loyalty, inducing a flywheel effect in the gaming ecosystem.

Additionally, for users facing onboarding complexities, STAN leverages its social-login driven system to ease the process. It provides a simple, step-by-step setup, ensuring even first-time users can connect their wallets effortlessly and dive into games quickly. Developers are hand-held and provided dedicated technical assistance, making it easy to integrate their games into STAN’s ecosystem.

The partnership with Aptos helps STAN ensure that users’ gaming records and achievements are securely stored on the blockchain, and that users are able to seamlessly transact on-chain at breakneck speeds and the best gas spends. This dual-sided approach removes traditional barriers, allowing gamers to focus on enjoying immersive experiences and enabling developers to bring their games to market without worrying about blockchain complexities.
Speaking on the developments, Mr. Parth Chadha, Co-Founder and CEO at STAN, said, “At STAN, we’re connecting gamers, creators, and developers like never before, and becoming the #1 gaming platform on Aptos is just the beginning. With our accelerator program, we’re paving the way for innovation and sustainable growth in gaming.”
Notably, STAN has collaborated with leading Web3 and Web2 games like Crypto Bingo, Grapes Bingo, Litcraft, and Fanplay, among others, and platforms like Okto and CoinDCX to bring innovative gaming experiences to millions of users. These partnerships exemplify STAN’s ability to connect developers with a highly engaged audience while addressing inefficiencies in retention and monetization. By leveraging its community-first approach, STAN supports game developers with targeted promotion through its extensive network of creators and advertisers, significantly reducing user acquisition costs. These collaborations also serve as proof of STAN’s mission to unite all gaming ecosystem stakeholders under one platform, enabling sustainable growth for both games and their communities.
In its quest to energize the gaming ecosystem exponentially, STAN is set to launch an accelerator program aimed to empower game developers by addressing inefficiencies such as high user acquisition costs, low ARPU, low retention rates, and ultimately inferior LTV/CAC. By providing investment support, mentorship, and access to STAN’s network of 20 million users, 350k KOLs, 250+ developers and advertisers, the initiative seeks to enable developers to scale their games sustainably.
MongoDB has announced a significant expansion of its MongoDB AI Applications Program (MAAP) at AWS re:Invent.
The initiative, launched earlier this year, has added global technology leaders Capgemini, Confluent, IBM, QuantumBlack, AI by McKinsey, and Unstructured to its ecosystem.
These partnerships aim to enhance the development and deployment of AI-powered solutions by providing organizations with advanced tools and integration options.
MongoDB has also partnered with Meta, focusing on integrating Meta’s AI models, including Llama, into the MAAP technology stack. The collaboration enables developers to harness Meta’s open-source tools alongside MongoDB’s capabilities, accelerating the delivery of AI applications. Future updates will include seamless mapping from MongoDB databases to LlamaStack APIs, simplifying workflows for developers.
“By joining the MAAP partner network, organizations like Capgemini and IBM bring expertise to help customers navigate the evolving AI landscape,” said Greg Maxson, Senior Director of AI GTM and Strategic Partnerships at MongoDB.
MAAP AI use cases
The MAAP initiative has already facilitated impactful AI use cases:
- CentralReach, a leader in autism and developmental disabilities care, utilized MAAP to optimize its platform by integrating over 4 billion clinical and financial data points. This advancement supports value-based care and clinical efficacy.
- IndiaDataHub, a market data analytics platform, leveraged MAAP and Meta’s AI tools to enhance sentiment analysis and data connectivity.
“Working with MongoDB and Meta has accelerated our AI strategy, helping us deliver timely, high-quality data analytics,” said Pranoti Deshmukh, CTO of IndiaDataHub.
Technical Innovations
The program’s expansion follows the recent introduction of vector quantization in MongoDB Atlas Vector Search. This feature reduces memory and storage requirements while maintaining performance, allowing developers to build scalable AI applications at reduced costs.
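For context, here is a hedged PyMongo sketch of what defining a vector index with quantization enabled can look like; the connection string, collection, field names, and dimensions are hypothetical placeholders:

```python
from pymongo import MongoClient
from pymongo.operations import SearchIndexModel

client = MongoClient("mongodb+srv://user:pass@cluster.example.net")  # hypothetical URI
collection = client["shop"]["products"]

# Enable scalar quantization on the embedding field to reduce vector
# memory and storage while keeping retrieval performance acceptable.
index = SearchIndexModel(
    name="embedding_index",
    type="vectorSearch",
    definition={
        "fields": [{
            "type": "vector",
            "path": "embedding",        # hypothetical field name
            "numDimensions": 1536,      # hypothetical dimensionality
            "similarity": "cosine",
            "quantization": "scalar",   # the vector quantization option
        }]
    },
)
collection.create_search_index(index)
```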
MongoDB’s partnerships with over 40 AI companies, including Astronomer and Arize AI, further support a diverse and interoperable ecosystem, enabling businesses to create tailored AI solutions.
Strengthening AI Ecosystems
Through its MAAP Center of Excellence, MongoDB aims to address the challenges in AI model deployment, retrieval techniques, and workflow optimization for over 150 organizations. This cross-functional collaboration underpins the company’s goal of empowering businesses with cutting-edge AI capabilities.
By expanding MAAP and aligning with industry leaders, MongoDB aims to redefine how organizations build and deploy AI-driven applications, ensuring they stay competitive in the rapidly evolving AI landscape.
MMS • Steef-Jan Wiggers
AWS has introduced the general availability of Lambda SnapStart for Python and .NET functions, a feature designed to improve the startup performance of serverless applications significantly.
Earlier, the company introduced Lambda SnapStart for Java functions to reduce cold starts. With SnapStart for Python and .NET functions, the capability now also applies to functions written in Python, C#, F#, and PowerShell.
Lambda SnapStart optimizes function cold-start latency by initializing environments ahead of time and caching their memory and disk states. This cached environment is then used to resume execution, minimizing delays often caused by cold starts. Channy Yun, a principal developer advocate for AWS cloud, writes:
When you invoke the function version for the first time, and as the invocations scale up, Lambda resumes new execution environments from the cached snapshot instead of initializing them from scratch, improving startup latency.
(Source: AWS News Blog Post)
Marc Brooker, VP/Distinguished Engineer at Amazon Web Services, explains in a LinkedIn blog post:
Each Lambda function runs in one or more Firecracker-based MicroVMs, and each MicroVM has some associated state: memory, device state, CPU registers, and the like. A “snapshot” is when we tell Firecracker to store this state – writing down the memory and other state to a file on disk. This snapshot can be restored on the same physical machine, or a different machine with the same hardware configuration. Restoring is a simple matter of copying that state back into memory, back into the devices, and back into the CPU, then telling the (virtual) CPU that it can go ahead and start running.
Developers can use the AWS Management Console, AWS Command Line Interface (AWS CLI), or AWS SDKs to activate, update, and delete SnapStart for Python and .NET functions. SnapStart can be activated on functions that use the Python 3.12 (and higher) or .NET 8 (and higher) managed runtimes.
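For illustration, here is a minimal sketch of activating SnapStart with the AWS SDK for Python (boto3). The function name is a placeholder; the snapshot itself is only taken when a new function version is published:

```python
import boto3

lambda_client = boto3.client("lambda")
FUNCTION_NAME = "my-python-function"  # placeholder name

# Enable SnapStart; Lambda snapshots the initialized environment
# the next time a function version is published.
lambda_client.update_function_configuration(
    FunctionName=FUNCTION_NAME,
    SnapStart={"ApplyOn": "PublishedVersions"},
)

# Wait for the configuration update to finish before publishing.
lambda_client.get_waiter("function_updated_v2").wait(FunctionName=FUNCTION_NAME)

# Publishing a version triggers initialization and snapshot caching;
# invocations of this version then resume from the cached snapshot.
version = lambda_client.publish_version(FunctionName=FUNCTION_NAME)
print("SnapStart-enabled version:", version["Version"])
```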
Yan Cui, an AWS Serverless Hero, tweeted:
Wow, SnapStart is now available for Python and .Net functions.
Interesting they didn’t do it for Node, I guess it’s not about popularity, so must be something about Node that doesn’t work well with SnapStart.
Currently, AWS Lambda SnapStart for Python and .NET functions is available in the US East (N. Virginia), US East (Ohio), US West (Oregon), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), Europe (Frankfurt), Europe (Ireland), and Europe (Stockholm) AWS Regions.
Lastly, with Python and .NET managed runtimes, SnapStart charges include the caching cost per published function version and restoration costs for each instance. The company recommends deleting unused function versions to lower SnapStart cache costs. Lambda’s pricing details are available on the pricing page.
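As a rough sketch of that recommendation (assuming a placeholder function name, and that no aliases still point at the old versions), unused versions can be pruned with boto3:

```python
import boto3

lambda_client = boto3.client("lambda")
FUNCTION_NAME = "my-python-function"  # placeholder name

# Collect all published versions, skipping the mutable $LATEST pseudo-version.
paginator = lambda_client.get_paginator("list_versions_by_function")
versions = [
    v["Version"]
    for page in paginator.paginate(FunctionName=FUNCTION_NAME)
    for v in page["Versions"]
    if v["Version"] != "$LATEST"
]

# Keep only the newest version; deleting the rest stops SnapStart cache
# charges for them. (Deletion fails for versions still referenced by an
# alias, which acts as a useful safety check.)
for old_version in sorted(versions, key=int)[:-1]:
    lambda_client.delete_function(FunctionName=FUNCTION_NAME, Qualifier=old_version)
```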
MMS • RSS
Posted on mongodb google news. Visit mongodb google news
Zurcher Kantonalbank Zurich Cantonalbank raised its holdings in shares of MongoDB, Inc. (NASDAQ:MDB – Free Report) by 23.3% during the third quarter, according to the company in its most recent 13F filing with the Securities and Exchange Commission. The firm owned 15,143 shares of the company’s stock after purchasing an additional 2,858 shares during the period. Zurcher Kantonalbank Zurich Cantonalbank’s holdings in MongoDB were worth $4,094,000 as of its most recent filing with the Securities and Exchange Commission.
Other hedge funds also recently modified their holdings of the company. Jennison Associates LLC increased its stake in MongoDB by 23.6% in the third quarter. Jennison Associates LLC now owns 3,102,024 shares of the company’s stock valued at $838,632,000 after purchasing an additional 592,038 shares in the last quarter. Swedbank AB raised its stake in MongoDB by 156.3% during the 2nd quarter. Swedbank AB now owns 656,993 shares of the company’s stock worth $164,222,000 after buying an additional 400,705 shares during the period. Westfield Capital Management Co. LP lifted its holdings in MongoDB by 1.5% during the third quarter. Westfield Capital Management Co. LP now owns 496,248 shares of the company’s stock worth $134,161,000 after acquiring an additional 7,526 shares in the last quarter. Thrivent Financial for Lutherans grew its stake in MongoDB by 1,098.1% in the second quarter. Thrivent Financial for Lutherans now owns 424,402 shares of the company’s stock valued at $106,084,000 after acquiring an additional 388,979 shares during the period. Finally, Blair William & Co. IL increased its holdings in shares of MongoDB by 16.4% in the second quarter. Blair William & Co. IL now owns 315,830 shares of the company’s stock worth $78,945,000 after acquiring an additional 44,608 shares in the last quarter. Institutional investors own 89.29% of the company’s stock.
Insider Buying and Selling
In related news, CRO Cedric Pech sold 302 shares of the business’s stock in a transaction on Wednesday, October 2nd. The stock was sold at an average price of $256.25, for a total transaction of $77,387.50. Following the sale, the executive now owns 33,440 shares in the company, valued at $8,569,000. The trade was a 0.90% decrease in their position. The sale was disclosed in a legal filing with the SEC, which is available at this hyperlink. Also, CAO Thomas Bull sold 1,000 shares of MongoDB stock in a transaction dated Monday, September 9th. The shares were sold at an average price of $282.89, for a total value of $282,890.00. Following the sale, the chief accounting officer now directly owns 16,222 shares in the company, valued at $4,589,041.58. The trade was a 5.81% decrease in their ownership of the stock. The disclosure for this sale can be found here. In the last 90 days, insiders have sold 23,600 shares of company stock worth $6,569,819. Corporate insiders own 3.60% of the company’s stock.
MongoDB Price Performance
Shares of NASDAQ MDB traded down $1.14 during mid-day trading on Tuesday, reaching $324.01. The company had a trading volume of 917,469 shares, compared to its average volume of 1,447,469. The business’s 50-day simple moving average is $284.45 and its 200-day simple moving average is $270.10. MongoDB, Inc. has a one year low of $212.74 and a one year high of $509.62. The company has a quick ratio of 5.03, a current ratio of 5.03 and a debt-to-equity ratio of 0.84.
Wall Street Analysts Forecast Growth
MDB has been the topic of several recent analyst reports. Oppenheimer lifted their price objective on MongoDB from $300.00 to $350.00 and gave the company an “outperform” rating in a report on Friday, August 30th. Wells Fargo & Company upped their price target on shares of MongoDB from $300.00 to $350.00 and gave the company an “overweight” rating in a report on Friday, August 30th. Truist Financial raised their price objective on shares of MongoDB from $300.00 to $320.00 and gave the stock a “buy” rating in a report on Friday, August 30th. Wedbush raised MongoDB to a “strong-buy” rating in a research note on Thursday, October 17th. Finally, Morgan Stanley increased their price objective on MongoDB from $320.00 to $340.00 and gave the stock an “overweight” rating in a report on Friday, August 30th. One investment analyst has rated the stock with a sell rating, five have given a hold rating, nineteen have assigned a buy rating and one has assigned a strong buy rating to the company’s stock. According to MarketBeat, MongoDB has an average rating of “Moderate Buy” and an average target price of $343.83.
About MongoDB
MongoDB, Inc., together with its subsidiaries, provides a general-purpose database platform worldwide. The company provides MongoDB Atlas, a hosted multi-cloud database-as-a-service solution; MongoDB Enterprise Advanced, a commercial database server for enterprise customers to run in the cloud, on-premises, or in a hybrid environment; and Community Server, a free-to-download version of its database, which includes the functionality that developers need to get started with MongoDB.
Article originally posted on mongodb google news. Visit mongodb google news
MMS • RSS
Posted on mongodb google news. Visit mongodb google news
LAS VEGAS, Dec. 2, 2024 — MongoDB, Inc. today at AWS re:Invent announced that a new cohort of organizations have joined the MongoDB AI Applications Program (MAAP) ecosystem of leading AI and tech companies. By lending their experience and expertise to MAAP, Capgemini, Confluent, IBM, QuantumBlack, AI by McKinsey, and Unstructured will offer customers additional integration and solution options, boosting the value customers receive from MAAP. Since it was launched earlier this year, MAAP has already made an impact, helping customers like CentralReach—which provides an AI-powered autism care and intellectual and developmental disabilities (IDD) platform—innovate with AI.
The MAAP Center of Excellence Team, a cross-functional group of AI experts at MongoDB, has collaborated with partners and customers across industries to overcome an array of technical challenges, empowering organizations of all sizes to build and deploy AI applications. The expansion of the MongoDB AI Applications Program follows the introduction of vector quantization to MongoDB Atlas Vector Search (which reduces vector sizes while preserving performance—at lower cost), as well as new integrations with leading AI and technology companies.
MongoDB is also collaborating with Meta on Llama to support developers in their efforts to build more efficiently and to best serve customers. Currently, both enterprise and public sector customers are leveraging Llama and MongoDB to build innovative, AI-enriched applications, accelerating progress toward business goals. In the coming months, MongoDB plans to implement turnkey mapping from its database to the LlamaStack APIs, empowering developers to deliver solutions to market more quickly and efficiently.
“At the beginning of 2024, many organizations saw the immense potential of generative AI, but were struggling to take advantage of this new, rapidly evolving technology. And 2025 is sure to bring more change—and further innovation,” said Greg Maxson, Senior Director of AI GTM and Strategic Partnerships at MongoDB. “The aim of MAAP, and of MongoDB’s collaborations with industry leaders like Meta, is to empower customers to use their data to build custom AI applications in a scalable, cost-effective way. By joining the MAAP partner network, Capgemini, Confluent, IBM, QuantumBlack, AI by McKinsey, and Unstructured are helping the program evolve to meet the ever-changing AI landscape, and offering customers an array of leading solutions.”
Launched in the summer of 2024—with founding members Accenture, Anthropic, Anyscale, Arcee AI, AWS, Cohere, Credal, Fireworks AI, Google Cloud, gravity9, LangChain, LlamaIndex, Microsoft Azure, Nomic, PeerIslands, Pureinsights, and Together AI—the MongoDB AI Applications Program is designed to help organizations unleash the power of their data and to take advantage of rapidly advancing AI technologies. It offers customers an array of resources to put AI applications into production: reference architectures and an end-to-end technology stack that includes integrations with leading technology providers, professional services, and a unified support system to help customers quickly build and deploy AI applications.
Because the AI landscape and customer expectations of AI continue to evolve, MongoDB has carefully grown the MAAP program—and the MAAP ecosystem of companies—to best meet customer needs. By working with AI industry leaders, MongoDB has gained a unique understanding of both the technology and implementation partners that can best help customers build AI applications, and has built the MAAP partner network accordingly.
New MAAP Partners Look Forward to Helping Customers Build AI Applications
A global consulting and technology services company, Capgemini offers integrated solutions for digital transformation, blending expertise with breakthrough technology. Confluent, meanwhile, is a cloud-native data streaming platform that allows users to stream, connect, process, and govern data in real time.
“Business leaders are increasingly recognizing generative AI’s value as an accelerator for driving innovation and revenue growth. But the real opportunity lies in moving from ambition to action at scale. We are pleased to continue working with MongoDB to help deliver tangible value to clients and drive competitive advantage by leveraging a trustworthy data foundation, thereby enabling gen AI at scale,” said Niraj Parihar, CEO of Insights & Data Global Business Line and Member of the Group Executive Committee at Capgemini. “MAAP helps clients build gen AI strategy, identify key use cases, and bring solutions to life, and we look forward to being a key part of this for many organizations.”
“Enterprise AI strategy is inextricably dependent upon fresh, trusted data about the business. Without real-time datasets, even the most advanced AI solutions will fail to deliver value,” said Shaun Clowes, Chief Product Officer at Confluent. “Seamlessly integrated with MongoDB and Atlas Vector Search, Confluent’s fully managed data streaming platform enables businesses to build the trusted, always-up-to-date data foundation essential for powering gen AI applications.”
Unstructured is the leading provider of ETL for LLMs, making it easy for enterprises to utilize their unstructured data with gen AI systems.
“Like MongoDB, we understand that data is essential to harnessing the power of gen AI,” said Brian Raymond, Founder and CEO of Unstructured. “We are excited to join the MongoDB AI Applications Program to bring our expertise in ingesting and preprocessing complex unstructured data for vector databases. The gen AI-ready data we continuously deliver and write to vector databases like MongoDB is essential to enabling our users to counter hallucinations, allowing the LLMs and AI projects that MAAP customers are working on to leverage sensitive, internal data while keeping models and projects up-to-date.”
Collaborating to Make an Impact with AI
Providing customers direct support from technical subject matter experts has been integral to MAAP’s success. Since the program’s inception, the MAAP Center of Excellence team—highly skilled AI experts from MAAP partners and groups across MongoDB—has worked with more than 150 organizations on a range of technical challenges, including model and technology stack evaluation, chunking strategies, advanced retrieval techniques, and the establishment of agentic workflows. Example projects include working on sound diagnostic-based maintenance recommendations for a large manufacturer, and customer service automations for companies across industries.
A recent example of how MAAP enables organizations to build with AI is IndiaDataHub, which is on a mission to build India’s largest market data and analytics platform.
Since the company’s founding, MongoDB Atlas has been the platform’s operational database for some of its key datasets, and earlier this year, IndiaDataHub joined MAAP to access AI expertise, in-depth support, and a full spectrum of technologies to enhance AI functionality within its analytics platform. This includes connecting relevant data in MongoDB with Meta’s AI models to perform sentiment analysis on text datasets.
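IndiaDataHub has not published its implementation, but a pipeline of this shape is conceptually simple. The sketch below is purely illustrative: the connection string, database, collection, and field names are hypothetical, and the Llama call is a stub standing in for whichever hosted or self-managed model endpoint is used:

```python
from pymongo import MongoClient

# Hypothetical connection details and schema.
client = MongoClient("mongodb+srv://user:password@cluster.example.mongodb.net")
articles = client["market_data"]["news_articles"]

def llama_sentiment(text: str) -> str:
    # Stub: in practice this would call a Llama model (hosted or
    # self-managed) and map its output to a sentiment label.
    return "neutral"

# Score any article that has not been tagged yet and write the
# label back to the same operational document.
for doc in articles.find({"sentiment": {"$exists": False}}).limit(100):
    articles.update_one(
        {"_id": doc["_id"]},
        {"$set": {"sentiment": llama_sentiment(doc["body"])}},
    )
```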
“Data is the oil that will fuel the growth of the modern Indian economy,” said Pranoti Deshmukh, Chief Technology Officer at IndiaDataHub. “Working with MongoDB, the MAAP ecosystem, and Meta’s AI tools, we’ve been able to accelerate our AI strategy to make high-quality, timely data and analytics available to everyone in India who needs it. The professional support and deep AI expertise we’ve received through the MAAP program have been outstanding.”
“We are thrilled to see how many enterprises are leveraging our open source AI models to build better solutions for their customers and solve the problems their teams are facing every day,” said Ragavan Srinivasan, VP of Product at Meta. “Leveraging our family of Meta models and the end-to-end technology stack offered by the MongoDB AI Applications Program demonstrates the incredible power of open source to drive innovation and collaboration across the industry.”
Another success story is CentralReach, which provides an AI-powered electronic medical record (EMR) platform that is designed to improve outcomes for children and adults diagnosed with autism and related intellectual and developmental disabilities (IDD).
Prior to working with MongoDB and MAAP, CentralReach was looking for an experienced partner to further connect and aggregate its more than 4 billion financial and clinical data points across its suite of solutions.
CentralReach leveraged MongoDB’s document model to aggregate the company’s diverse forms of information from assessments to clinical data collection, so the company could build rich AI-assisted solutions on top of its database. Meanwhile, MAAP partners helped CentralReach to design and optimize multiple layers of its comprehensive buildout. All of this will enable CentralReach to support initiatives such as value-based outcome measurement, clinical supervision, and care delivery efficacy. With these new data layers in place, providers will be able to make substantial improvements to their clinical delivery to optimize care for all those they serve.
“As a mission-driven organization, CentralReach is always looking to innovate on behalf of the clinical professionals—and the more than 350,000 autism and IDD learners—that we serve globally,” said Chris Sullens, CEO of CentralReach. “So being able to lean on MongoDB’s database technology and draw on the collective expertise of the MAAP partner network—in addition to MongoDB’s tech expertise and services—to help us improve outcomes for our customers and their clients worldwide has been invaluable.”
The expansion of the MongoDB AI Applications Program builds on recent AI-related announcements from MongoDB.
In October, MongoDB announced vector quantization capabilities in MongoDB Atlas Vector Search. By reducing vector storage and memory requirements while preserving performance, these capabilities empower developers to build AI-enriched applications with more scale—and at a lower cost.
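As an illustration of what enabling this looks like in practice, the sketch below creates an Atlas Vector Search index with scalar quantization using PyMongo. The cluster details, index name, field path, and dimension count are assumptions made for the example:

```python
from pymongo import MongoClient
from pymongo.operations import SearchIndexModel

# Hypothetical cluster, database, and collection.
collection = MongoClient(
    "mongodb+srv://user:password@cluster.example.mongodb.net"
)["ai_app"]["documents"]

index_model = SearchIndexModel(
    definition={
        "fields": [
            {
                "type": "vector",
                "path": "embedding",       # assumed field holding the vectors
                "numDimensions": 1536,     # assumed embedding size
                "similarity": "cosine",
                "quantization": "scalar",  # the announced quantization option
            }
        ]
    },
    name="quantized_vector_index",
    type="vectorSearch",
)
collection.create_search_index(model=index_model)
```

Scalar quantization compresses each stored vector, which is what drives the storage and memory savings the announcement describes, at a small and usually acceptable cost in retrieval precision.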
Outside of MAAP, since the start of the year MongoDB has built partnerships with more than 40 leading AI companies, enabling additional flexibility and choice for customers. Recent collaborations include those with Astronomer, Arize AI, Baseten, CloudZero, Modal, and ObjectBox. By working closely with its AI partners on product launches, integrations, and real-world challenges, MongoDB is able to bring a better understanding of AI to joint customers, deliver interoperability for end-to-end workflows, and give them the resources and confidence they need to move forward with this groundbreaking technology.
To learn more about building AI-powered apps with MongoDB, please see MongoDB’s library of articles, tutorials, analyst reports, and white papers. And for more on the MongoDB AI Applications Program, see the MAAP webpage.
About MongoDB
Headquartered in New York, MongoDB’s mission is to empower innovators to create, transform, and disrupt industries by unleashing the power of software and data. Built by developers, for developers, MongoDB’s developer data platform is a database with an integrated set of related services that allow development teams to address the growing requirements for a wide variety of applications, all in a unified and consistent user experience. MongoDB has more than 50,000 customers in over 100 countries. The MongoDB database platform has been downloaded hundreds of millions of times since 2007, and there have been millions of builders trained through MongoDB University courses. To learn more, visit mongodb.com.
Source: MongoDB
Article originally posted on mongodb google news. Visit mongodb google news
MMS • RSS
Posted on mongodb google news. Visit mongodb google news
MongoDB (NASDAQ:MDB – Free Report) had its price objective raised by Loop Capital from $315.00 to $400.00 in a research report published on Monday morning, Benzinga reports. The firm currently has a buy rating on the stock.
Other equities research analysts also recently issued research reports about the company. Wells Fargo & Company raised their price objective on MongoDB from $300.00 to $350.00 and gave the company an “overweight” rating in a research report on Friday, August 30th. Wedbush raised shares of MongoDB to a “strong-buy” rating in a report on Thursday, October 17th. Citigroup upped their price objective on shares of MongoDB from $350.00 to $400.00 and gave the company a “buy” rating in a report on Tuesday, September 3rd. UBS Group lifted their price target on MongoDB from $250.00 to $275.00 and gave the stock a “neutral” rating in a research report on Friday, August 30th. Finally, Needham & Company LLC upped their target price on MongoDB from $290.00 to $335.00 and gave the company a “buy” rating in a research report on Friday, August 30th. One investment analyst has rated the stock with a sell rating, five have assigned a hold rating, nineteen have issued a buy rating and one has issued a strong buy rating to the stock. Based on data from MarketBeat, the stock has a consensus rating of “Moderate Buy” and an average price target of $343.83.
MongoDB Stock Performance
MDB stock opened at $325.15 on Monday. The company has a debt-to-equity ratio of 0.84, a current ratio of 5.03 and a quick ratio of 5.03. MongoDB has a 1-year low of $212.74 and a 1-year high of $509.62. The business has a 50-day moving average price of $284.45 and a 200-day moving average price of $270.10. The company has a market capitalization of $24.02 billion, a P/E ratio of -107.67 and a beta of 1.15.
Insiders Place Their Bets
In related news, CRO Cedric Pech sold 302 shares of the firm’s stock in a transaction dated Wednesday, October 2nd. The stock was sold at an average price of $256.25, for a total transaction of $77,387.50. Following the completion of the sale, the executive now directly owns 33,440 shares of the company’s stock, valued at approximately $8,569,000. This trade represents a 0.90% decrease in their position. The sale was disclosed in a document filed with the Securities & Exchange Commission, which can be accessed through this hyperlink. Also, Director Dwight A. Merriman sold 3,000 shares of the business’s stock in a transaction dated Wednesday, October 2nd. The shares were sold at an average price of $256.25, for a total transaction of $768,750.00. Following the completion of the sale, the director now owns 1,131,006 shares in the company, valued at approximately $289,820,287.50. This trade represents a 0.26% decrease in their ownership of the stock. The disclosure for this sale can be found here. Over the last 90 days, insiders have sold 23,600 shares of company stock valued at $6,569,819. Insiders own 3.60% of the company’s stock.
Institutional Investors Weigh In On MongoDB
A number of institutional investors and hedge funds have recently made changes to their positions in MDB. Atria Investments Inc grew its position in MongoDB by 1.2% in the 1st quarter. Atria Investments Inc now owns 3,259 shares of the company’s stock valued at $1,169,000 after acquiring an additional 39 shares in the last quarter. Cetera Investment Advisers increased its stake in MongoDB by 327.6% during the 1st quarter. Cetera Investment Advisers now owns 10,873 shares of the company’s stock worth $3,899,000 after buying an additional 8,330 shares during the period. Cetera Advisors LLC lifted its holdings in MongoDB by 106.9% during the 1st quarter. Cetera Advisors LLC now owns 1,558 shares of the company’s stock worth $559,000 after buying an additional 805 shares in the last quarter. Fulton Bank N.A. boosted its position in MongoDB by 7.7% in the 2nd quarter. Fulton Bank N.A. now owns 1,135 shares of the company’s stock valued at $284,000 after buying an additional 81 shares during the period. Finally, Harbor Capital Advisors Inc. grew its stake in shares of MongoDB by 26.2% in the second quarter. Harbor Capital Advisors Inc. now owns 3,579 shares of the company’s stock worth $895,000 after acquiring an additional 742 shares in the last quarter. 89.29% of the stock is owned by hedge funds and other institutional investors.
MongoDB Company Profile
MongoDB, Inc., together with its subsidiaries, provides a general-purpose database platform worldwide. The company provides MongoDB Atlas, a hosted multi-cloud database-as-a-service solution; MongoDB Enterprise Advanced, a commercial database server for enterprise customers to run in the cloud, on-premises, or in a hybrid environment; and Community Server, a free-to-download version of its database, which includes the functionality that developers need to get started with MongoDB.
Article originally posted on mongodb google news. Visit mongodb google news