Amazon zeroes out ETL for shifting Aurora and DynamoDB data into Redshift

MMS Founder
MMS RSS

Posted on nosqlgooglealerts. Visit nosqlgooglealerts

Amazon is making it easier to analyze transactional data held in its Aurora PostgreSQL and DynamoDB databases by avoiding the need for ETL routines when moving data from them into its Redshift data warehouse.

Aurora is Amazon’s relational database service, built for its public cloud with full MySQL and PostgreSQL compatibility: Aurora MySQL is a drop-in replacement for MySQL and Aurora PostgreSQL is a drop-in replacement for PostgreSQL. Both MySQL and PostgreSQL are open-source relational database management systems (RDBMS) for transactional data, with Amazon saying: “They store data in tables that are interrelated to each other via common column values.”

DynamoDB is Amazon’s fully managed, proprietary NoSQL database for non-relational data. Such databases store data within a single data structure, such as a JSON document, instead of the tabular structure of a relational database.

Redshift is Amazon’s cloud data warehouse for data analytics, and it needs to be fed transactional data before its analytic processes can get to work. Normally, ETL (Extract, Transform, and Load) routines are used to select datasets from a source, transform them, and move them into a target data warehouse. The zero-ETL concept does away with such routines by building the necessary integration functions into the source databases.

There are existing, generally available, zero-ETL integrations for Aurora MySQL and Amazon RDS for MySQL with Redshift, which enable customers to combine data from multiple relational and non-relational databases in Redshift for analysis. Amazon RDS (Relational Database Service) is a managed SQL database service which supports Aurora, MySQL, PostgreSQL, MariaDB, Microsoft SQL Server, and Oracle database engines.


Amazon claims zero-ETL integrations mean that IT staff no longer have to build, manage, and operate ETL pipelines. AWS Senior Solutions Architect and blogger Esra Kayabali says zero-ETL: “automates the replication of source data to Amazon Redshift, simultaneously updating source data for you to use in Amazon Redshift for analytics and machine learning (ML).”

You still need an ETL function, but these AWS products now set up, operate, and manage the whole thing. Kayabali blogs: “To create a zero-ETL integration, you specify a source and Amazon Redshift as the target. The integration replicates data from the source to the target data warehouse, making it available in Amazon Redshift seamlessly, and monitors the pipeline’s health.”
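In practice, an integration is a named resource linking a source ARN to a Redshift target ARN. As a hedged sketch, the parameters might look like the following; all ARNs and names here are placeholders, and the exact API call (shown only in a comment) should be checked against current AWS documentation:

```python
# Hypothetical parameters for a zero-ETL integration from an Aurora
# cluster to a Redshift Serverless namespace. All ARNs and names below
# are placeholders, not real resources.
params = {
    "IntegrationName": "orders-to-redshift",
    "SourceArn": "arn:aws:rds:us-east-1:123456789012:cluster:my-aurora-cluster",
    "TargetArn": "arn:aws:redshift-serverless:us-east-1:123456789012:namespace/my-namespace",
}

# With AWS credentials configured, the integration would then be created
# with something like (assumption -- verify against the current boto3 docs):
#   import boto3
#   boto3.client("rds").create_integration(**params)
```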

Her blog tells readers how to “create zero-ETL integrations to replicate data from different source databases (Aurora PostgreSQL and DynamoDB) to the same Amazon Redshift cluster. You will also learn how to select multiple tables or databases from Aurora PostgreSQL source databases to replicate data to the same Amazon Redshift cluster. You will observe how zero-ETL integrations provide flexibility without the operational burden of building and managing multiple ETL pipelines.”

Here, Amazon is integrating its own databases with its own data warehouse. When the target data warehouse is owned and operated by one supplier and the source databases by others, partnerships with third parties are needed to remove the burden of developing and operating ETL pipelines.

Bryteflow tells us: “The zero-ETL process assumes native integration between sources and data warehouses (native integration means there is an API to directly connect the two) or data virtualization mechanisms (they provide a unified view of data from multiple sources without needing to move or copy the data). Since the process is much more simplified and the data does not need transformation, real-time querying is easily possible, reducing latency and operational costs.”

Snowflake says it has zero-ETL data sharing capabilities across clouds and regions. It is partnering with Astro so that customers can orchestrate ETL operations in Snowflake with Astro using the Snowflake provider for Airflow.

CData says its Sync offering “provides a straightforward way to continuously pipeline your Snowflake data to any Database, Data Lake, or Data Warehouse, making it easily available to Analytics, Reporting, AI, and Machine Learning.”

You can learn more in Kayabali’s post.



Why safe use of GenAI requires a new approach to unstructured data management [Q&A]


Large language models generally train on unstructured data such as text and media. But most enterprise data security strategies are designed around structured data (data organized in traditional databases or formal schemas).

The use of unstructured data in GenAI introduces new challenges for governance, privacy and security that these traditional approaches aren’t equipped to handle.

We spoke to Rehan Jalil, CEO of Securiti, about how organizations need to rethink the way they govern and protect unstructured data in order to safely leverage GenAI.

BN: How does unstructured data differ from structured data?

RJ: At a high level, the difference is straightforward. Structured data is any data that lives in traditional row-column databases (e.g., relational SQL databases, Excel spreadsheets, or data warehouses) or has a predefined data model. This tends to include things like financial transactions, inventory information, and patient records.

Unstructured data is all the other data that doesn’t exist in spreadsheets and databases (often stored in non-relational or NoSQL databases or data lakes). It’s typically text-heavy and lacks the organization and properties of structured data — for example, all of the documents, emails, social media posts, web pages, and multimedia content that a company may have or own. It can also include all the regulations and policies that companies may need to adhere to, such as tax codes or insurance terms of coverage.

Today, about 90 percent of data being generated in enterprises is unstructured.

BN: How does this impact generative AI deployments?

RJ: In the past, companies really just mined their structured data to make business decisions. But GenAI is upending that. Most generative AI models work by analyzing unstructured data, such as text data from the web, and providing outputs based on that data. Generative AI technologies employ this data to train models and build natural language processing capabilities. This causes a problem for organizations, as the vast majority of their data management solutions were built for structured data.

The issue is that the industry has not put the same resources into developing technologies and strategies for managing unstructured data as it has for structured data. Lots of organizations struggle to even identify all the locations where their unstructured data might live — across which shared drives, cloud systems, applications, and so on. And once it is identified, unstructured data requires different, more complex management and specialized techniques in order for data teams to extract meaningful insights and patterns from it — techniques such as natural language processing, text mining, and machine learning.

BN: Why is unstructured data so challenging to manage and secure?

RJ: There are a number of factors at play. The biggest issue is simply volume and variety. There are massive amounts of unstructured data within organizations, and it comes from a diverse range of sources, such as emails, documents, social media posts, and multimedia files. This makes it difficult for teams to keep track of and enforce consistent governance and security policies across the organization.

Uncontrolled access and sharing is another hurdle. Once created, unstructured data tends to grow quickly across various systems, devices, and cloud services as people copy, modify, manipulate, and share the content. Because of this, it can be very difficult to keep track of where data came from and who should have access.

Unstructured data also tends to live across many silos, and ownership is often fuzzy. The data is frequently created and managed by different departments or individuals within an organization, leading to data silos and ambiguity around data ownership and accountability. While structured data is more likely to have known ownership within an organization due to understood security or cost implications, a company’s unstructured data is often either sequestered for legitimate reasons (e.g., upcoming commentary for an acquisition) or for less desirable reasons (e.g., political boundaries between divisions).

Lastly, unstructured data comes in a diverse range of formats. Whereas structured data has collapsed into a small set of universal standards, SQL being a principal one, unstructured content systems have a multitude of formats and legacy patterns. The tools needed to manage these formats in a unified way are unique and require a commitment from the organization to deploy and use them.

BN: What should organizations do to safely use unstructured data for GenAI?

RJ: Managing unstructured data for generative AI is possible if enterprises acquire seven key capabilities:

  • Discover, catalog, and classify unstructured data: Automatically discover, catalog, and classify files and objects on the fly; this is essential for GenAI projects.
  • Preserve access entitlements of unstructured data: Maintain existing enterprise entitlements at source systems to ensure that only authorized users access relevant data via GenAI prompts.
  • Trace the lineage of unstructured data: Understand data mapping and flows from source to end results, showing how the data moves from unstructured data systems to vector databases, to LLMs, and finally to endpoints.
  • Curate unstructured data: Automate the labeling or tagging of files to ensure that only relevant data with associated context is fed to GenAI models, thereby providing accurate responses with citations.
  • Sanitize unstructured data: Classify and redact or mask sensitive data from files that GenAI projects use.
  • Focus on the quality of unstructured data: Emphasize the freshness, uniqueness, and relevance of data to prevent unintended data usage in GenAI projects.
  • Secure unstructured prompts and responses with pre-configured policies: Detect, classify, and redact sensitive information on the fly, block toxic content, and enforce compliance with topic and tone guidelines.
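The “sanitize” capability above, for instance, boils down to detecting sensitive spans and masking them. A deliberately naive Python sketch, with three regexes standing in for what would really be trained classifiers and curated detectors:

```python
import re

# Illustrative detectors only; production systems use far more robust
# classification than a handful of regexes.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "SSN":   re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "PHONE": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
}

def redact(text):
    """Replace every detected entity with a [LABEL] placeholder."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

sample = "Contact jane.doe@example.com or 555-867-5309, SSN 123-45-6789."
```

Running `redact(sample)` masks the email address, phone number, and SSN while leaving the surrounding text intact.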

Image credit: SergeyNivens/depositphotos.com



Podcast: Generally AI – Season 2 – Episode 3: Surviving the AI Winter

MMS Founder
MMS Anthony Alford Roland Meertens

Article originally posted on InfoQ. Visit InfoQ

Transcript

Roland Meertens: Let me start with a fun fact: did you know, Anthony, that in 1966 Seymour Papert created the Summer Vision Project?

Anthony Alford: Oh, I did not know that.

Roland Meertens: And his idea was to create a significant part of a visual system, which can do pattern recognition.

Anthony Alford: Okay.

Roland Meertens: In 1966. And they did break this down into sub-problems, which can be solved independently, because doing pattern recognition in one summer in 1966 is a hard task. An example of a sub-task is dividing the image into regions such as likely objects, likely background areas, and chaos. And I don’t know what class chaos is, but I want more of this.

Anthony Alford: But do you?

Roland Meertens: Yes. As I said, I was just reading the technical review today, but what they wanted to do was view shape and surface properties, and the extensions they thought of for the end of summer, in August, would be handling objects with complex patterns and backgrounds, such as a cigarette pack with writing and bands of different color, tools, cups, et cetera, which shows that at that time it was still normal to smoke and have your cigarette packages in the workplace.

Anthony Alford: I think they probably came with your desk when you got hired.

Roland Meertens: Yes, it’s such an interesting look at how life was in 1966. The other fun fact I have by the way, is that in 1980, this person, Seymour Papert, wrote a book Mindstorms: Children, Computers and Powerful Ideas, where his thesis was that children can learn faster and be more social when they’re using computers for learning. And this book is what Lego named their robotics kits after.

Anthony Alford: Very cool. Those things were really fun to play with too.

Roland Meertens: The Lego Mindstorms robots are amazing. You get any project done quickly instead of having to play around with installing operating systems in Linux and whatever. It’s amazing.

Anthony Alford: Very cool. I did not know that.

AI Winter [02:26]

Roland Meertens: Welcome to Generally AI, season two, episode three, where I, Roland Meertens, will be discussing the AI Winter with Anthony Alford.

Anthony Alford: That’s right, Roland. I’m really looking forward to it.

Roland Meertens: Yes. So I found an AI algorithm which was invented before the first AI Winter, which is still relevant today. And you actually looked up all about AI Winter. So shall we start with what you did?

Anthony Alford: I shall. Hopefully I’m not going to steal any of your thunder because a lot of the technologies that have come and gone through the AI Winters are still here with us and I’m going to weave that into the story. And by the way, when we’re recording this in July and I’m in the Northern Hemisphere, it’s really not winter, it’s summer. It’s pretty hot. So I kind of would like a little winter at this point.

Roland Meertens: Even in London, the weather is warm today, which is very rare.

Anthony Alford: It’s a high of like 25, 27 degrees centigrade. You know we laugh at that. But anyway, I digress. Those of our fans who are still following us after the first season may remember when we talked about Alan Turing and Claude Shannon and how they both made early contributions to AI. And you probably remember that they did this a very long time ago. The term artificial intelligence was coined in the 1950s.

Roland Meertens: Where they also wanted to invent AI over the summer. I see a pattern.

Anthony Alford: The Summer of AI followed by the AI Winter. Not to foreshadow, right?

Roland Meertens: Yes, indeed, indeed.

Anthony Alford: So when people talk about AI these days, of course, they mean neural networks. They mean deep learning. They mean ChatGPT. But the key technology is the neural network. That was not always the case. Actually, throughout the years, AI researchers have explored a lot of technologies, like symbolic logic, like trees and search, but neural networks have been around for just about as long as AI has been around. And hopefully, not to steal your thunder, but it was there before the first AI Winter and is still around.

Roland Meertens: I will talk more about your search algorithms.

Rosenblatt’s Perceptron [04:33]

Anthony Alford: Ah, okay. Well, in 1957, a scientist named Frank Rosenblatt developed the perceptron. And I probably don’t have to explain neural networks and perceptrons to our audience, but why not? We’ve got time. A perceptron is a mathematical model of what a single biological neuron does. It has several numeric inputs. It computes a weighted sum of those inputs, so each input has a different weight. And then it thresholds that weighted sum to decide whether to fire its output. The magic though is that you don’t have to figure out yourself by hand or by thinking what the weight should be. You teach the perceptron what the weight should be using supervised learning.

Roland Meertens: Did he already look at backpropagation, or is it only a single neuron?

Anthony Alford: That’s right. He did not use backpropagation, but his learning algorithm was: you put the inputs in, you knew the expected output, you got the actual output, and you could take the difference. And then there’s a simple update algorithm to update the weights.

And then you do this with a bunch of input-output examples. Do it iteratively, adjust the weights until it gets as correct as you can. And Rosenblatt actually wrote a software implementation of this perceptron. I think he had a bank of perceptrons. It could classify a 72-by-72 pixel image. They actually used patterns on a punch card for the images, the image inputs.
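Rosenblatt’s update rule is simple enough to sketch in a few lines of Python. This is a toy illustration, not his original implementation (which ran on punch-card image data); it learns the logical OR function, which is linearly separable:

```python
# Toy Rosenblatt perceptron: step activation, error-driven weight updates.
# Training data: the logical OR function, which is linearly separable.
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]

w, b, lr = [0.0, 0.0], 0.0, 1.0

def predict(x):
    # Weighted sum of inputs plus bias, thresholded at zero.
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0

for _ in range(10):                      # a few passes over the examples
    for x, target in data:
        error = target - predict(x)      # 0 if correct, +/-1 if wrong
        w[0] += lr * error * x[0]        # Rosenblatt's update rule
        w[1] += lr * error * x[1]
        b += lr * error
```

After a handful of passes the weights settle on a separating line and all four OR examples are classified correctly.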

Roland Meertens: Yes.

Anthony Alford: He did a set of experiments and in one experiment he showed the perceptron could distinguish between the letter E and the letter X, even if they were rotated, and do it about 95% of the time, which is not bad for 1960.

Roland Meertens: Yes, even when it’s rotated, quite big grids on punch cards?

Anthony Alford: I don’t know how big an angle of rotation, but probably a few degrees, I think.

Roland Meertens: Yes.

Anthony Alford: But you would think in 1960 if they could do this kind of computer vision, why didn’t we get ChatGPT in the seventies and AGI before Y2K?

Roland Meertens: Compute and not enough GPUs.

Anthony Alford: Well, yes, that was one reason. But also your guy, Seymour Papert, and Marvin Minsky, they published a paper in 1969. They had done research showing a severe limitation of perceptrons.

Roland Meertens: Oh, I think I know what it is.

Anthony Alford: You know what it is? It can only learn a line. Technically speaking, it can only discriminate between classes that are linearly separable by a hyperplane. So the classic example is: a perceptron cannot learn the XOR function, the exclusive-OR function.
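The limitation can even be checked by brute force: no single perceptron, whatever its weights, classifies all four XOR cases correctly. A small sketch scanning a grid of candidate weights:

```python
# Brute-force check: no single-layer perceptron computes XOR.
xor = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

def accuracy(w1, w2, b):
    correct = 0
    for (x1, x2), target in xor:
        y = 1 if w1 * x1 + w2 * x2 + b > 0 else 0
        correct += (y == target)
    return correct / 4

# Scan a grid of weights and biases; since XOR is not linearly
# separable, the best any line can do is 3 out of 4.
grid = [i / 2 for i in range(-8, 9)]   # -4.0 .. 4.0 in steps of 0.5
best = max(accuracy(w1, w2, b) for w1 in grid for w2 in grid for b in grid)
```

No combination on the grid (nor any other, by the linear-separability argument) reaches 4 out of 4.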

Roland Meertens: Yes.

Anthony Alford: A lot of people gave up on perceptrons. Now, of course, perceptrons weren’t the only thing in AI: there were a lot of other rumblings in the AI field. Apparently, I didn’t know this, there was that computer vision project in the ’60s, but there was also a machine translation project in the ’60s that was not meeting expectations.

So this has that classic story, which is too good to be true, but it’s illustrative. You could put in the English text, “The spirit is willing, but the flesh is weak”, translate that to Russian and then translate the Russian back to English and get, “The vodka is good, but the meat is rotten”.

Roland Meertens: Yes, I heard of that example before. It’s so funny.

Anthony Alford: Yes, probably it’s not true. The good stories are never true.

Roland Meertens: Yes, it’s good still. It could be true.

The First AI Winter [08:09]

Anthony Alford: It could be true. So in 1966, the US government pulled its funding from machine translation. In 1973, the British government published a report called the Lighthill Report that criticized the lack of progress in AI and they subsequently cut funding for AI research in their universities. In the US, DARPA was funding a lot of AI. In 1969, they changed their policies to no longer fund undirected research, but instead to focus on concrete projects with obvious military applications. And after the Lighthill Report, DARPA didn’t fund a lot of AI research either. So the end of AI Summer had come, it was AI Winter.

Roland Meertens: Yes, the end of the free money.

Anthony Alford: Yes. Now, that doesn’t mean there was no AI research going on at all. In fact, some people dispute that there actually was an AI Winter, based on things like enrollment in AI organizations such as special interest groups. But the funding had definitely dried up.

Roland Meertens: So there were still people studying it or students starting or being interested in AI.

Anthony Alford: Right. There were of course labs like the Stanford AI Lab and Carnegie Mellon, but the scale and the funding globally had been reduced.

Roland Meertens: Yes.

Anthony Alford: And in fact, some people were actually still working with perceptrons, but now they were calling this a more general term of artificial neural networks. In their paper, Minsky and Papert did say that neural networks could learn exclusive-OR and other nonlinear functions if you had multiple layers of perceptrons.

Rosenblatt just had a single layer, but if you had multiple layers where the output of perceptrons are the input of the next layer of perceptrons, you could learn nonlinear functions. In fact, you only need a single hidden layer to learn exclusive-OR.
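With a hidden layer, fixed weights that solve XOR are easy to write down by hand: one hidden unit computes OR, another computes AND, and the output fires when OR is true but AND is not. A minimal sketch:

```python
def step(z):
    return 1 if z > 0 else 0

def xor_net(x1, x2):
    # Single hidden layer with hand-picked weights; no training needed.
    h_or  = step(x1 + x2 - 0.5)        # hidden unit 1: OR
    h_and = step(x1 + x2 - 1.5)        # hidden unit 2: AND
    return step(h_or - h_and - 0.5)    # output: OR AND NOT AND
```

This demonstrates the existence claim Minsky and Papert made; the open problem at the time was how to *learn* such weights automatically.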

Roland Meertens: Yes.

Anthony Alford: But nobody knew how to do the training algorithm for them.

Roland Meertens: Oh, okay. So you can easily show that with the right weights, you can solve XOR, right?

Anthony Alford: Yes.

Roland Meertens: But nobody knew how to do backpropagation.

Backpropagation Brings Back AI [10:18]

Anthony Alford: Right. And that’s in fact what happened. So around 1985, a group of researchers, including Geoff Hinton, which is a name you may know, they came up with an algorithm for training these multi-layer perceptron networks: gradient descent with backpropagation, AKA backprop. What’s funny is Hinton’s paper literally says that the mathematical derivation of it can be omitted by the reader who finds such derivations tedious.

Roland Meertens: Okay.

Anthony Alford: Which I think that’s definitely me, so I’m going to skip over that. I think probably, again, most of our listeners are familiar with gradient descent and backprop. And in fact, those ideas were not new in 1985, it’s just that nobody had applied them to neural networks until then.
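Gradient descent with backprop on a single-hidden-layer network is compact enough to sketch in pure Python. This is a minimal illustration (sigmoid units, squared error), not the 1985 paper’s algorithm verbatim; it only demonstrates that the loss falls as the weight updates flow backward through the layers:

```python
import math
import random

random.seed(0)

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# XOR training set
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

# A 2-2-1 network: two hidden sigmoid units, one sigmoid output.
# Each unit has two weights plus a bias (the third entry).
w_h = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(2)]
w_o = [random.uniform(-1, 1) for _ in range(3)]

def forward(x):
    h = [sigmoid(w[0] * x[0] + w[1] * x[1] + w[2]) for w in w_h]
    o = sigmoid(w_o[0] * h[0] + w_o[1] * h[1] + w_o[2])
    return h, o

def total_loss():
    return sum((forward(x)[1] - t) ** 2 for x, t in data)

lr = 0.5
loss_before = total_loss()
for _ in range(5000):
    for x, t in data:
        h, o = forward(x)
        delta_o = (o - t) * o * (1 - o)                    # output-layer error
        delta_h = [delta_o * w_o[j] * h[j] * (1 - h[j])    # errors backpropagated
                   for j in range(2)]                      # through output weights
        for j in range(2):                                 # gradient-descent step
            w_o[j] -= lr * delta_o * h[j]
            for k in range(2):
                w_h[j][k] -= lr * delta_h[j] * x[k]
            w_h[j][2] -= lr * delta_h[j]
        w_o[2] -= lr * delta_o
loss_after = total_loss()
```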

Roland Meertens: Yes.

Anthony Alford: Neural networks were back. It was the Summer of AI again.

Roland Meertens: Yes. What are we going to do now?

Anthony Alford: Well, in 1989, a researcher named Yann LeCun, again, another name you may know, he was using multi-layered neural networks to recognize handwritten digits, 16 by 16 pixel images, and he was getting 95% accuracy. So AI was back.

Roland Meertens: Pretty good. I assume that he had applications for things like sorting letters in the post office.

Anthony Alford: Again, you’re familiar with this. Yes, he was doing digit recognition for, I think, post office applications.

Roland Meertens: Yes, interesting.

Anthony Alford: I remember the early ’80s, and it was a great time in so many ways, Men Without Hats and the “Safety Dance” and other things. And there was an AI boom. There were things like expert systems, which were very popular. And in Japan, the Japanese government had started to spend a lot of money on their Fifth Generation Computing project. And in the US, DARPA was doing something similar. But guess what happened?

Roland Meertens: I’m feeling that there will be a second AI Winter.

The Second AI Winter [12:12]

Anthony Alford: Second AI Winter, 1990s, rock and roll was over and it was grunge and flannel shirts because of the AI Winter being so cold. Yes, AI Winter. And for neural networks in particular, people were running up to limits there. The backprop training algorithm worked, but it didn’t work great on very deep networks, networks with lots of layers. I’m sure you know what the problem was.

Roland Meertens: I’m assuming there will be vanishing gradients or exploding gradients.

Anthony Alford: That’s exactly right. And you had mentioned hardware earlier: it was slow, it was the ’80s.

Roland Meertens: Also, the amount of data. How many data sets can you fit on a floppy disk?

Anthony Alford: Exactly.

Roland Meertens: This is even before the small floppy disks.

Anthony Alford: Yes, we had the eight-inch floppy disks. But yes, you’re exactly right. So the gradients were a problem. The hardware was a problem, and the data sets were a problem. So again, AI Winter in general and for neural networks in particular. So let’s skip over the ’90s and the early 2000s. And let’s get to the part of the story that, actually, feels like it wasn’t that long ago, but if you think about it, it was 2012.

Roland Meertens: It’s already been years.

Anthony Alford: 2012.

Roland Meertens: Yes, yes.

Anthony Alford: It’s 12 years, Roland.

Roland Meertens: Yes, yes. Time flies.

The Endless Summer of Deep Learning [13:30]

Anthony Alford: Time flies. So Hinton and some of his colleagues at Toronto had published their results. They were using the MNIST dataset of handwritten digits, which is similar to what LeCun was using in ’89. They were getting better than 99% accuracy. But the gold standard for recognizing objects in images by then was ImageNet. ImageNet had come, and there’s the dataset problem solved, right?

Roland Meertens: Yes.

Anthony Alford: Millions of images, 256-by-256 pixels.

Roland Meertens: Yes. Of loads of different types of objects. Fei-Fei Li made this dataset.

Anthony Alford: Exactly. And it certainly kicked off the modern era of deep learning, I think. And then using that, one of Hinton’s students trained a neural network that got about 85% accuracy, compared to the previous record of less than 75%. So that was AlexNet, right?

Roland Meertens: Yes.

Anthony Alford: And so those two nets, ImageNet and AlexNet, had really kicked off the age of deep learning.

Roland Meertens: Yes.

Anthony Alford: And it’s been endless summer ever since.

Roland Meertens: Yes. It’s crazy that we keep seeing that the more data you give these networks and the bigger you make these networks, the more they can learn.

Anthony Alford: And in fact, I think it was OpenAI that, pretty famously, plotted their scaling laws and showed it’s a power law: as you increase model size and training compute, the training loss is predictable.
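The power-law claim is that loss falls as loss(C) = a · C^(-b) in compute C. A toy illustration of what that predictability means; the constants here are invented for the example, not the published coefficients:

```python
# Toy power-law scaling curve: loss(C) = a * C**(-b).
# The constants a and b are made up for illustration only;
# they are not the published scaling-law coefficients.
a, b = 10.0, 0.05

def loss(compute):
    return a * compute ** (-b)

# A power law is a straight line on a log-log plot: every 10x increase
# in compute shrinks the loss by the same constant factor, 10**(-b).
ratio_low  = loss(1e20) / loss(1e19)
ratio_high = loss(1e21) / loss(1e20)
```

Because the shrink factor per decade of compute is constant, extrapolating the line gives a prediction for the loss at a compute budget you have not yet trained at.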

Roland Meertens: But would you then say that the second AI Winter lasted till 2012?

Anthony Alford: Well-

Roland Meertens: When would you say that the second AI winter started and ended?

Is Winter Coming? [15:09]

Anthony Alford: These things are always kind of like…it’s not a bright line like the solstice or whatever, but it’s certainly the 1990s and the early 2000s are considered to be one AI Winter. And so you might start thinking, well, when is the next AI Winter?

Roland Meertens: Yes. When are you predicting that we will all be fired and need to find a new job?

Anthony Alford: What if I told you that it already happened?

Roland Meertens: Oh. Tell me more.

Anthony Alford: Of course, as I said, since 2012, it’s been nonstop AI and we’re so used to hearing about it now all the time. But apparently, in late 2018 and early 2019, people were already predicting another AI Winter. There were headlines and think pieces saying that deep learning had reached, or would soon reach, its limits.

Roland Meertens: Yes.

Anthony Alford: And this talk continued through 2019 and on into 2020. And in fact, if you look at things like the number of AI publications, according to Wikipedia, for the years 2021 and 2022, they actually did drop: the number of publications dropped like 30%.

Roland Meertens: Interesting.

Anthony Alford: But then of course, was that AI Winter or was it a global pandemic?

Roland Meertens: Yes. Although I do feel that in 2018, people were very excited about seeing the performance of networks go up a lot year after year.

Anthony Alford: Yes.

Roland Meertens: And we basically said at that time, perception is solved. It’s a solved problem.

Anthony Alford: More or less. So what’s interesting is, well of course you know that ’21 and ’22 was really a blip. December of 2022 was the release of ChatGPT. And since then we’ve had almost two years of constant AI Summer, we have had a Heat Wave of AI, but people are still predicting another AI Winter any day now.

Roland Meertens: Yes, it can’t always go up.

Anthony Alford: Exactly.

Roland Meertens: Unless one day we all work in AI. Everybody is working at an AI company nowadays.

Anthony Alford: And in fact, people like LeCun are saying things like: these LLMs, while impressive, they will not give us AGI, they’re not general intelligence.

Roland Meertens: Yes.

Anthony Alford: LeCun has been pretty vocal about that. Some people disagree, some people think LLMs can do it. But back to AI Winter, you’ve probably heard of Rodney Brooks.

Roland Meertens: Yes.

Anthony Alford: He’s the guy that founded iRobot. He’s been working in AI for my entire lifetime.

Roland Meertens: Yes.

Anthony Alford: He has a blog where in 2018 he made a bunch of predictions about future technological advances, things like autonomous vehicles and space flight and AI. And he updates these every year with how he has done.

What’s interesting, in 2018, so around the time people were predicting the limits of deep learning, predicting a new AI Winter, he wrote on his blog, he predicted that the next big thing in AI would happen no earlier than 2023 and no later than 2027. And he said, “I don’t know what this is, but whatever it turns out to be”, this is a quote, “whatever it turns out to be, it will be something that someone is already working on and there are already published papers about it”. He said that in 2017.

Roland Meertens: Yes. So what did he refer to as the next big thing?

Anthony Alford: In January of this year, he said it’s ChatGPT.

Roland Meertens: Yes.

Anthony Alford: And as we know, we talked about last season, the T in GPT, the transformer, the paper was published in 2017.

Roland Meertens: ’17, yes, indeed. Yes.

Anthony Alford: So he’s taking a victory lap on that prediction.

Roland Meertens: Yes, but it’s also an easy prediction to say-

Anthony Alford: Well, that’s what he said.

Roland Meertens: “There will be the next big thing in the next five years and someone is already working on it”.

Anthony Alford: He said that exactly. He said, “I’ve seen this so many times. I know that the next big thing is always going to be: somebody’s working on it and five years later…” What’s interesting, in that same blog post in January of this year, he predicted another AI Winter. He said AI Winter is just around the corner and it’s going to be cold.

Roland Meertens: Yes. How cold is it going to be?

Anthony Alford: I don’t know. Personally, I don’t have a prediction. I just know that I obviously respect Brooks a lot and respect his prediction. On the other hand, this is sort of like that thing about: “So-and-so has predicted nine of the last five recessions”. It’s very easy to predict there’s an AI Winter if you just say: it’s going to happen next year, it’s going to happen in a couple of years, you’ll eventually be right.

Roland Meertens: Yes, it’s true, true. And also there will always be one person correct and that’s the person you talk about. Survivor bias.

Anthony Alford: Right. But again, he’s been through this so many times.

Roland Meertens: Yes.

Anthony Alford: He’s probably got good reasons for thinking it.

Roland Meertens: How was it for you, Anthony, to live through the AI Winter?

Anthony Alford: What’s interesting is I was doing my own grad school research in, it wasn’t AI, it was intelligent systems, intelligent robotics in the late 1990s, which was certainly during one of those winters.

Now, what’s interesting is in our work, we were inspired a lot by Rod Brooks, and we were doing a lot of stuff like behavior-based robotics, which we contrasted with traditional AI: things like tree search, perception and modeling, 3D world modeling and planning and stuff like that. It was fun. I enjoyed it. I was not doing things like neural networks. Although, what’s interesting, somebody else in our lab was doing neural networks.

Roland Meertens: Yes.

Anthony Alford: And I talked with him a few years ago and he said, “It’s great. It’s coming back and my stuff is still relevant”.

Roland Meertens: Yes, I must say that I learned about neural networks in university in 2009, 2010. And then as students, we kind of acknowledged this was outdated technology, not state of the art, something you shouldn’t use too much. And then in 2012, of course, you got AlexNet. So we were like, “Oh, okay, interesting. Good on the teacher for teaching us neural networks, but let’s see where it goes”. And it kept blowing up and kept blowing up. Interesting experience.

Anthony Alford: So that’s my weather report on the AI Winter.

A Star Is Born [22:28]

Roland Meertens: All right. So what I wanted to do with you, Anthony, is take a look at an AI algorithm, which was invented before the first AI Winter and which is still relevant today. So you already talked about neural networks, which was very much up, down, up, down. But I was thinking about what are examples of what people consider to be AI, but which kind of pass the test of time where people can always use it, it’s always relevant, it’s always good, it’s always interesting.

And I think that the thing I want to talk about is A* search, and I think most listeners are familiar with the A* search algorithm. People probably heard of it at least. And I think that alone shows that it is an everlasting algorithm. That even if you are not into AI, you are still familiar with A*.

Anthony Alford: I’ve heard of it.

Roland Meertens: Yes. You probably had to implement it for some coding tests. The one thing which I also find interesting about AI in general is that in the past, people would say, “Oh, search is AI and world modeling is AI”. Nowadays, you wouldn’t really say that search algorithms are pure AI. Would you? Or if you talk about chess algorithms or something, you wouldn’t really say, “Oh, it’s AI”, it’s more like computer science.

Anthony Alford: Yes. I tried to find the quote, and I know I’m not imagining it, but I couldn’t find it. But there’s a quote that’s something like: AI just means something that computers aren’t supposed to be good at. And once computers become good at it, it’s no longer AI.

Roland Meertens: Yes. I think I always summarized it as “once it works, it becomes computer science”.

Anthony Alford: Oh, that’s awesome. Okay.

Roland Meertens: Yes, this is one of these algorithms which became so good, it became computer science. So in terms of history, do you know when A* was invented?

Anthony Alford: I don’t. I’m guessing probably the 1950s or ’60s though.

Roland Meertens: Yes. So it was invented in 1968 by Peter Hart, Nils Nilsson, and Bertram Raphael of the Stanford Research Institute. I will also put the paper in our show notes so you can read it yourself. And what’s also interesting is that it was invented for Shakey the Robot, for its path planning. I’ll also put a PDF of that report in the show notes.

I read both the original paper and looked through the report for Shakey the Robot. And what disappointed me was that in the report for Shakey the Robot, they just mentioned using a breadth-first search.

Anthony Alford: No A*?

Roland Meertens: So that’s what I find interesting is that everybody always says, “Oh, A*”, Wikipedia says, “Oh, it’s invented for Shakey the Robot”. But if you look at the report, I think at some point they invented it and then they already thought that even A* was too slow to do a grid-based search.

Anthony Alford: Interesting.

Roland Meertens: So if you look through the paper, what I really like about papers this old is that they have very smart, inventive solutions to problems. They’re not doing the grid search you would do nowadays, where you divide the room into tiles of 10 by 10 centimeters and navigate between those tiles.

They have something where they define the obstacles and then they have the tangent towards these obstacles so that the robot goes in the most optimal path around these obstacles and only plans from object edge to object edge. So the amount of search space becomes super low. So they don’t even need to think of A* anymore in the reports.

Anthony Alford: With compute and other constraints at the time, they probably had to do a lot of these kinds of optimizations.

Roland Meertens: Yes. And also what you mentioned before in terms of representing images on punch cards, I would love to do more episodes around that, it’s such an interesting topic. Maybe next season we have more time for that.

The other thing to note about A* is that it is similar to Dijkstra’s algorithm, which we discussed as a fun fact at the start of the last episode. The big difference is that A* is really point-to-point, not point-to-world like Dijkstra, where you calculate the distance to the entire world.

Heuristic: the Key Ingredient [26:49]

Roland Meertens: And the thing I really like, for listeners who don’t know it: A* adds a heuristic, an estimate of how close you are to a solution. So if you’re going from point A to point B and a node gets you closer to point B, that’s probably a node you want to prioritize investigating first rather than going backwards.

Imagine that you’re navigating from Amsterdam to Paris, as a human, you just follow your finger towards Paris, you know where to go and that’s what you explore first. And then if you find that some road is blocked, then you backtrack and you start searching around it. And here, what’s cool is that the better your heuristic is, the more you can prune away from your solution space.
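For readers following along, here is a minimal sketch of the loop Roland describes, with the heuristic deciding which node to investigate first. All names and the grid demo are illustrative, not any particular production implementation:

```python
import heapq

def a_star(start, goal, neighbors, cost, heuristic):
    """Expand the node with the lowest f = g + h first; h is the heuristic."""
    frontier = [(heuristic(start), 0, start, [start])]
    visited = set()
    while frontier:
        _, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return path
        if node in visited:          # already reached via a cheaper route
            continue
        visited.add(node)
        for nxt in neighbors(node):
            if nxt not in visited:
                g2 = g + cost(node, nxt)
                heapq.heappush(frontier, (g2 + heuristic(nxt), g2, nxt, path + [nxt]))
    return None                      # no path exists

# Hypothetical demo: a 5x5 grid with Manhattan distance as the heuristic.
def neighbors(p):
    x, y = p
    steps = [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]
    return [(a, b) for a, b in steps if 0 <= a < 5 and 0 <= b < 5]

goal = (4, 4)
path = a_star((0, 0), goal, neighbors,
              cost=lambda a, b: 1,
              heuristic=lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1]))
```

With an admissible heuristic like Manhattan distance on a grid, the path returned is optimal; a better heuristic simply prunes more of the search space, as discussed above.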

Anthony Alford: Right. Didn’t this become really the core of a lot of chess-playing engines as well?

Roland Meertens: Yes. And that’s what I really liked, the versatility of the algorithm is super big. I’ve seen it being applied in Super Mario where people had a heuristic on how to play Super Mario. They would expand the game states and then pick the state where Mario was the closest to the right boundary or something or had the most points, whatever. So I’ve seen it being used there.

For chess, I think they use these trees where you have some kind of probabilistic tree structure where you keep expanding, but you do keep estimating how many points you have, what your estimated win chance is. But yes, they are super versatile. You have loads of different applications and not just mapping.

And even when there is mapping, it’s still, I think, one of the best algorithms. Whenever I have to program a search algorithm for anything, I try to think of: how can I add a heuristic to this so I know that I’m closer to getting to the actual solution.

Anthony Alford: That makes sense. What’s interesting is we could probably do another episode on A* versus something like reinforcement learning and the parallels there. Because basically what it is, right, how do you pick the next best action?

Roland Meertens: Well, it’s a combination of what your current state is, where you probably want to be, and whether you can keep expanding everything until you get closer to it. And as I said, it’s really an everlasting algorithm.

So one thing which people probably noticed when they ever visualized or implemented A* is that A* can be slow if your heuristic isn’t good or if you just have a lot of space to explore. And you can see this if you go to the Wikipedia article, I think they have a visualization of expanding a search horizon, not even a very big maze, but you do still keep expanding a lot of states. So I would say that if someone plans a route, I don’t know, from one end of America to another end of America, you and I would just say, “Let’s go on the highway”.

Anthony Alford: Yes. Interstate 40.

Roland Meertens: Interstate 40. So that’s, I guess, your heuristic. But for mapping companies or for graphs, there is an alternative, right? Contraction hierarchies is an example where you contract parts of the graph. So you can ignore the smaller streets when you have to travel a large distance. So you just navigate a graph on a higher level and then navigate back down onto the smaller streets. So there are alternatives.

Anthony Alford: So you may not have studied this part, but how does it prevent getting into a cycle?

Roland Meertens: Oh, you keep track of what you already expanded.

Anthony Alford: I see, okay.

Roland Meertens: Yes, you keep track of which nodes you already visited. That’s what I generally do. Because you know that A* found the fastest path towards a certain spot, you also don’t have to expand it again afterwards. Whether you track expanded or visited nodes, either works.

Anthony Alford: Okay, nice.

An Evergreen AI Algorithm [30:36]

Roland Meertens: Yes. So just going back towards the everlasting aspects. As I said, there are for mapping nowadays, you would say that since 1968, there must have been better algorithms invented, right? It’s a long time.

But what I like is that sometimes I talk to people working at map companies, and they have to plan very specific routes. For example, imagine you’re driving a truck that can’t go over certain bridges, or a truck that has to stay on roads of a minimum width. Maybe you have customers with a sports car who want to drive over nice and windy roads with great views.

There’s loads of different customers who have demands for which roads they can take. And that means that once you have a lot of these different things to factor in, these companies just switch back to A* because this algorithm can always deal with any kind of constraints about what nodes you can and cannot expand to.

Anthony Alford: That’s really cool. I never thought of that. It’s that heuristic that you have to have lets you essentially guide your result so that it meets your constraints or your desires.

Roland Meertens: Yes, the heuristic can be anything you want. So in the case of mapping, you would probably take a distance heuristic. That makes a lot of sense.

Anthony Alford: But you can also add the windiness or the width or something of the road?

Roland Meertens: Yes, I guess that could be something, though I think the truck is a better example. If you have a pre-computed, contracted graph for the highway, one edge summarizes many roads, so for any constraint you have to take the minimum over everything that edge represents.

If the maximum weight of one bridge is two tons, this entire edge becomes unusable for your truck. But with A*, you simply say, “Can I go over this edge with my truck? No? Okay. Well, then I won’t expand this node”. So you have one algorithm which can just handle everything.
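The idea Roland describes, rejecting edges the truck cannot use at expansion time rather than baking limits into a contracted graph, can be sketched as a filtered neighbor function. The graph layout and edge attributes here are hypothetical:

```python
def truck_neighbors(node, graph, truck_weight):
    """Yield only the successors this truck may legally drive to."""
    for nxt, edge in graph[node].items():
        # e.g. skip a bridge whose weight limit is below the truck's weight
        if truck_weight <= edge["max_weight"]:
            yield nxt

# Hypothetical road graph: A->B crosses a 2-ton bridge, A->C does not.
roads = {
    "A": {"B": {"max_weight": 2}, "C": {"max_weight": 40}},
    "B": {},
    "C": {},
}
reachable = list(truck_neighbors("A", roads, truck_weight=5))
```

Plugging such a function in as the neighbor generator is all A* needs; the rest of the algorithm is unchanged, which is why one algorithm can handle many different constraints.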

Anthony Alford: I see. That’s pretty awesome.

A* in Games [32:46]

Roland Meertens: It is pretty awesome. The last thing which I want to encourage listeners to take a look at is a problem which Factorio had. Do you know the game Factorio?

Anthony Alford: Is that one of the ones with the snakes? Which one is this?

Roland Meertens: No.

Anthony Alford: That’s Slither.

Roland Meertens: Slither.io. Yes. No, Factorio is a factory building game.

Anthony Alford: Oh, okay.

Roland Meertens: Which is extremely addictive to people like me. If you enjoy automating things and solving problems, like most people who are programmers will, then Factorio is like crack. It is absolutely insane how addictive this game is.

Anthony Alford: Well, I’m going to stay away from it then.

Roland Meertens: Yes, smart. Sometimes I manage to convince friends to buy the game, and then a week later you see them with dark circles around their eyes saying, “I haven’t slept in a week”. It’s insane how this game grips people.

Anyway, the goal of the game is that you want to build a big factory and launch a rocket, but you have aliens on the planet which you at some point can start shooting. So once you shoot an alien at a big distance from your factory on the landscape, the alien tries to plan a path to your base to start attacking whatever attacked him.

And Factorio faced the problem that pathfinding on these massive maps can be slow, especially if you have to plan around a big lake or something like that. They had already implemented A*, and the heuristic was simply the straight-line distance, which doesn’t know about the path around the lake.

So A* was too slow, but you can modify the heuristic. The heuristic is basically: you want to get quickly towards the place where the fire originated, where you want to go. So what they did is run a second search from the place the aliens want to attack back towards the aliens, over all the tiles they deemed walkable.

So they would divide the world up into bigger tiles and then they would just calculate: can I cross this tile or not, which is faster than having to do this for a massive map. So they have this very global idea. As I said, if you want to go from Amsterdam to Paris, you know roughly where Paris is, so you can really quickly expand in this direction. Or if you have to go around the lake, you have to go around the lake. It doesn’t make sense to start searching away from the lake.

So they modified the heuristic, which again, most people set to the Euclidean distance, but instead, they calculate a rough distance for each tile back to the place they want to go. And that way they can nudge the search very quickly in the right direction. And then they just still use a normal A* algorithm with this new heuristic.
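The Factorio-style trick Roland describes, replacing the Euclidean heuristic with real distances precomputed backwards from the target over coarse tiles, might look like this. This is a sketch assuming uniform tile costs and hypothetical names:

```python
from collections import deque

def precompute_heuristic(target, neighbors):
    """BFS outward from the target; the resulting table gives exact tile
    distances, so A* expands almost straight towards the target."""
    dist = {target: 0}
    queue = deque([target])
    while queue:
        node = queue.popleft()
        for nxt in neighbors(node):
            if nxt not in dist:
                dist[nxt] = dist[node] + 1
                queue.append(nxt)
    return dist  # use dist.get(node, float("inf")) as h(node) in A*

# Hypothetical 1-D corridor of tiles 0..4, with the target at tile 0.
h = precompute_heuristic(0, lambda n: [m for m in (n - 1, n + 1) if 0 <= m <= 4])
```

Because these distances already account for obstacles like the lake, the heuristic never lures the search into a dead end the way straight-line distance can.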

So I will put the link to the blog in the show notes so you can read it yourself and see the visualizations. It’s extremely cool to see how you can modify A* and how much flexibility you have with it.

Anthony Alford: It’s like the Swiss army knife of algorithms.

Roland Meertens: Yes. It genuinely is a Swiss army knife, which I love implementing. It’s always fun. It’s always good. It’s easy and extremely efficient. And you can build in your own nudges and heuristics very fast. I love it.

Anthony Alford: You’d think they would’ve called it A++ or something.

Roland Meertens: Yes. I think the reason for calling it A* was that the star was meant to indicate that this was optimal.

I know there is something around where in the paper they mentioned that this is the optimal way to plan a path, that it always finds an optimal route. And then another researcher said, “No, but you haven’t thought of this constraint”.

So they published a paper about how that’s not the case, that you should maybe modify something in the order you expand nodes. But then later, another researcher published a paper showing that, “Oh, but it actually works”, like there was a mistake in the other paper. So there’s a lot of drama around A*.

Anthony Alford: I’m going to have to read up on it. I love drama.

Roland Meertens: Yes, I mean drama in the most academic sense where someone finds a counter example.

Anthony Alford: Well, it’s the old saying that in academia, the fights are so brutal because the stakes are so low.

Roland Meertens: Yes, that must be true. Any last questions about A*?

Anthony Alford: No. As you can guess, I had heard of it and was familiar with it, but I wasn’t aware of how useful it was or how versatile and how long it had been around.

Roland Meertens: Yes, it’s a true winner of the Land before AI Winters. Always sticking around, always there, always trustworthy, always fast.

Anthony Alford: Just like the mammals after the dinosaurs went extinct, huh?

Roland Meertens: Yes. I don’t know how they came about because breadth-first search is also still around. It’s just amazing that there’s such a versatile or easy to implement algorithm for this case.

Anthony Alford: Sometimes a simple thing is great.

Words of Wisdom [37:54]

Roland Meertens: It’s exactly what you need. So words of wisdom. Did you learn anything new this podcast? Or did you learn anything yourself recently?

Anthony Alford: In one of those strange coincidences, I’d finished [reading] my last book and picked up an old book, an old favorite, The Moon is a Harsh Mistress by Robert Heinlein, one of the great classics of sci-fi. And you may know that, spoiler alert, one of the main characters is a self-aware computer.

And this book was written in 1966. What’s interesting is in the very beginning of the book, part of the description of the computer is that it has “associative neural networks” attached to it. So I thought that was kind of interesting and fun.

What also is interesting, in this book, the computer learns how to do essentially a CGI version of a person. They made up a fictitious person and the computer creates essentially a real time artificial video, photorealistic video of the person along with voice and so forth, which has never been done before in [the book’s] universe.

Roland Meertens: Yes and it’s still not done.

Anthony Alford: Well, yes, we’re getting there. What’s funny, I don’t know if you’re an aficionado of classic sci-fi like Robert Heinlein. Robert Heinlein…in his books in the 1940s and 1950s, he had a lot of space technology, like nuclear-powered, faster than light rockets, but all the navigation is done with slide rules and things like that. So he missed out on computers early, but by the ’60s, he basically just jumped to: “Okay, computers are now self-aware”.

Roland Meertens: Interesting. Yes, I love the old sci-fi where if you read it now, you don’t realize that these people didn’t know what we have now.

Anthony Alford: There’s a couple of old Heinlein stories from the ’50s and ’60s where characters essentially have cell phones and it’s just like a casual thing.

Roland Meertens: Yes. If you read it now you just think, “Oh, of course they have a cell phone”, but no, the guy has to invent cell phones in his head.

Anthony Alford: Exactly. Anyway, I don’t know if that’s words of wisdom, but it’s something that I had not remembered.

Roland Meertens: It’s a good fun fact. The one thing which I started looking at yesterday was the history of gliders and airplanes and parachutes. So the one thing which I discovered is number one, the first aerostatic flight was carried out by the Montgolfier brothers at Versailles in 1783.

Anthony Alford: Yes.

Roland Meertens: That’s interesting fun fact number one. That’s quite a long time ago, right?

Anthony Alford: Yes. I think, again, one of those stories that may not be true but is really good: supposedly Benjamin Franklin witnessed this and somebody remarked how useless it was and he said, “Well, what’s the use of a newborn baby?”

Roland Meertens: Yes, that’s one way to phrase it. Anyway, so then the other thing which I was wondering is then: when was the first parachute invented? And here, Wikipedia says that the modern parachute was invented in 1783. It’s the same year, right?

Anthony Alford: They jumped out of the balloon with it, I think, right?

Roland Meertens: Yes. And there was someone who jumped off a building, I guess. Also, I think at some point someone invented a frameless parachute and I think they brought it with them on a balloon trip in 1797. And then at some point they ditched the balloon, so they had to use a parachute or at least that’s what the person said, “Oh, the balloon had problems, so I had to use my parachute”. Maybe it was like sales talk, you don’t know.

Anthony Alford: Probably we should caveat that it was the first successful parachute descent.

Roland Meertens: Yes.

Anthony Alford: You know somebody tried it before and did not succeed.

Roland Meertens: Yes. The first time someone tries a parachute is by jumping from a building, but then they invent these balloons so they can jump from even higher. So in 1785, apparently someone demonstrated it as a means of safely disembarking from a hot air balloon. And I read on Wikipedia that Blanchard’s first parachute demonstrations were conducted with a dog as a passenger. Yes, sad.

Anthony Alford: Just like the Russians, right?

Roland Meertens: Yes.

Anthony Alford: Sending a dog into space, let the dog test out the parachute.

Roland Meertens: Anyway, the reason I was looking at this was that I was reading a Mechanics’ Magazine from Saturday, September 25th, 1852, where George Cayley is talking about governable parachutes. George Cayley has basically invented a glider here: an airplane without any motor, and he calls it a governable parachute. I will also put this article in the show notes; it’s quite interesting to read. Gliders, or airplanes, were called governable parachutes before they were called gliders or airplanes.

Anthony Alford: They needed to workshop that one a little better.

Roland Meertens: I like it. If you don’t have an airplane, what would you call it? And you do have parachutes because they have been around for a long time. Parachutes are old news now.

So what if you can have a governable parachute? He also describes that to start this governable parachute, you can maybe hang it behind one of those air balloons. So his idea was to go up behind an air balloon and then release it and have his governable parachute be steered.

Anthony Alford: He didn’t have a fast enough motorboat.

Roland Meertens: He didn’t even have a motor. This is before the motor.

Anthony Alford: Had a steam train.

Roland Meertens: But as you say, motorboat: if you look at the illustration on the front page of this thing, he is sitting in what very much looks like a massive boat or a chariot. It looks more like a boat than a chariot.

Anthony Alford: Interesting.

Roland Meertens: Yes, it’s fun. Thank you so much for recording this podcast, Anthony. Thank you so much to the listeners for listening to this podcast. Please like it if your podcast platform has an option for this and please tell your friends all about it and make sure that they listen to this.

Anthony Alford: Thank you, Roland. It’s a pleasure as always.

Roland Meertens: Yes. Thank you very much for recording this with me, Anthony. I like it as always. The only last fun fact I still want to tell you is that I think in one episode last season, I was complaining about my Apple Watch and that it didn’t show the correct time.

I’m done with it. It was still showing the wrong time last week. I bought a Garmin watch. I now have the correct time. And instead of the battery lasting for half a day, even though the watch is relatively new, I now have, I don’t know, 17 days of battery life.

Anthony Alford: Wow.

Roland Meertens: I like it.

Anthony Alford: And it works with your iPhone?

Roland Meertens: It works with my iPhone, yes. It shows my messages whenever they come in.

Anthony Alford: I might look into that.

Roland Meertens: Yes, as I said, I was kind of fed up. Especially if I would cycle to work, then cycle somewhere else, cycle back to my home and then take a run. The watch would be dead. It can’t keep up with an active lifestyle somehow.

Anthony Alford: Ridiculous.

Roland Meertens: Now, last weekend I went on a hike. I walked 25 kilometers, had the map on the watch the whole time and it has only used a quarter of its charge or something. It’s crazy.

Mentioned:

About the Authors


Subscribe for MMS Newsletter

By signing up, you will receive updates about our latest information.

  • This field is for validation purposes and should be left unchanged.


MongoDB director Dwight Merriman sells $671600 in stock – Investing.com

MMS Founder
MMS RSS

Posted on mongodb google news. Visit mongodb google news

Dwight A. Merriman, a director at MongoDB, Inc. (NASDAQ:MDB), has recently sold a total of $671,600 worth of the company’s Class A Common Stock. The transactions occurred on October 10 and October 15, with share prices ranging from $272.97 to $287.82. Following these sales, Merriman holds 1,130,006 shares directly and 89,063 shares indirectly through the Dwight A. Merriman Charitable Foundation. Additionally, 522,896 shares are held by The Dwight A. Merriman 2012 Trust for the benefit of his children. The sales were conducted under a Rule 10b5-1 trading plan.

In other recent news, MongoDB has been the focus of several positive analyst reviews following a strong second quarter performance. The company’s Q2 results showcased a 13% year-over-year revenue increase, totaling $478 million, largely driven by the success of its Atlas and Enterprise Advanced offerings. MongoDB added more than 1,500 new customers during the quarter, bringing its total customer base to over 50,700. Looking ahead, MongoDB’s management anticipates Q3 revenue to be between $493 million and $497 million, with full fiscal year 2025 revenue projected to be between $1.92 billion and $1.93 billion.

Analysts from DA Davidson, Piper Sandler, KeyBanc Capital Markets, and Oppenheimer have all raised their price targets for MongoDB, reflecting the company’s robust performance. DA Davidson maintained its Buy rating on MongoDB and raised the price target to $340. Similarly, Piper Sandler confirmed its Overweight rating on MongoDB shares, maintaining a $335.00 price target. KeyBanc Capital Markets raised its price target on MongoDB to $330, maintaining an Overweight rating. Oppenheimer also maintained its Outperform rating on MongoDB, raising the price target to $350.

These adjustments and ratings reflect a positive stance on MongoDB’s prospects, indicating a belief that the company will continue to perform well over an extended period. Despite some challenges, these firms continue to support the stock for long-term investors.

While Dwight A. Merriman’s recent stock sales might raise eyebrows, it’s crucial to consider MongoDB’s broader financial picture. According to InvestingPro data, MongoDB boasts a market capitalization of $21.07 billion, reflecting its significant presence in the database software market. The company’s revenue for the last twelve months as of Q2 2023 stood at $1.82 billion, with a robust revenue growth of 22.37% over the same period.

InvestingPro Tips highlight that MongoDB holds more cash than debt on its balance sheet, indicating a strong financial position. This liquidity strength is further emphasized by another tip stating that the company’s liquid assets exceed short-term obligations. These factors suggest that despite the insider sale, MongoDB maintains a solid financial foundation.

It’s worth noting that while MongoDB is not currently profitable, with a negative P/E ratio of -93.97, analysts predict the company will turn profitable this year. This optimism is reflected in the fact that 22 analysts have revised their earnings upwards for the upcoming period, as pointed out by another InvestingPro Tip.

For investors seeking a more comprehensive analysis, InvestingPro offers 11 additional tips for MongoDB, providing a deeper understanding of the company’s financial health and market position.

This article was generated with the support of AI and reviewed by an editor. For more information see our T&C.

Article originally posted on mongodb google news. Visit mongodb google news



Challenges and Lessons Porting Code from C to Rust

MMS Founder
MMS Sergio De Simone

Article originally posted on InfoQ. Visit InfoQ

In a two-installment series, Stephen Crane and Khyber Sen, software engineers at Immunant, recount how they ported the AV1 decoder used by VideoLAN and FFmpeg from C to Rust for the Internet Security Research Group (ISRG). The series includes plenty of detail about how they ensured not to break things and how they optimized performance.

The AV1 decoder used in VideoLAN’s VLC and FFmpeg, dav1d, has been under development for over six years and contains about 50k lines of C code and 250k lines of assembly. As Crane notes, it is mature, fast, widely used, and highly optimized to be small, portable, and very fast. This strongly suggested porting it rather than rewriting it from scratch in Rust.

The first choice engineers at Immunant had to make was whether to do the porting step by step or to transpile the entire codebase using c2rust to get an unsafe but runnable implementation in Rust, then refactor and rewrite it to make it safe and idiomatic. They went the c2rust route because of two major advantages: the ability to test the ported code while refactoring it, and the reduced need for expert domain knowledge.

We found that full CI testing from the beginning while rewriting and improving the Rust code was immensely beneficial. We could make cross-cutting changes to the codebase and run the existing dav1d tests on every commit. […] The majority of the team on this project were experts in systems programming and Rust, but did not have previous experience with AV codecs. Our codec expert on the project, Frank Bossen, provided invaluable guidance but did not need to be involved in the bulk of the effort.

The task of refactoring the ported Rust code into safe, idiomatic Rust was encumbered by several challenges, some related to mismatches between C and Rust, such as with lifetime management (borrowing), memory ownership, buffer pointers, and unions; others arising from dav1d design, strongly relying on shared mutable access across threads.

Thread safety issues related to shared state were addressed through locks using Mutex and RwLock and validating at runtime that a thread could access data without introducing delays with Mutex::try_lock() and RwLock::try_read() / RwLock::try_write().

This approach was fine for the cases where only a single thread needed to mutate a value shared across threads. However, dav1d also relies on concurrent access to a single buffer from multiple threads, where each thread accesses a specific subrange of the buffer. Instead of the more idiomatic Rust approach of disjoint slices exclusively assigned to different threads, Immunant engineers created a buffer wrapper type, DisjointMut, responsible for handling mutable borrows and ensuring each of them has exclusive access.

Two other challenging areas were self-referential structures, mostly used for cursors tracking buffer positions and links between context structures, and untagged unions. Since Rust does not allow self-referential structures, cursor pointers were replaced by integer indices, while context structures were unlinked and referenced through function parameters. Untagged unions were converted into tagged Rust equivalents where convenient; in other cases, the zerocopy crate helped reinterpret the same bytes as two different types at runtime to avoid changing the union representation and size.

A major goal of the porting was to preserve performance, so Immunant engineers took care to monitor performance regression throughout the refactoring stage for each commit. As they progressed in the transition to safe code, they realized performance was mostly impacted by rather subtle factors such as the cost of dynamic dispatch to assembly code, bounds checking, and structure initialization. Finally, they dealt with finer optimizations related to branching, inlining, and stack usage.

The work on performance optimization brought a significant reduction in the overhead introduced by the porting, which went down from 11% to 6%. Overall, the process of porting dav1d to rav1d took over 20 person-months with a team of 3 developers and required more manual effort than initially foreseen, says Crane, but it showed it is possible to rewrite existing C code into safe, performant Rust and to solve all the threading and borrowing challenges.

For applications where safety is paramount, rav1d offers a memory safe implementation without additional overhead from mitigations such as sandboxing. We believe that with continued optimization and improvements, the Rust implementation can compete favorably with a C implementation in all situations, while also providing memory safety.

There is much more to learn from the process that led to the creation of rav1d than can be covered here, so do not miss the original write-up for the full details.


Subscribe for MMS Newsletter

By signing up, you will receive updates about our latest information.

  • This field is for validation purposes and should be left unchanged.




Copilot Now Available in OneDrive: AI-Powered Features for Streamlined Document Management

MMS Founder
MMS Robert Krzaczynski

Article originally posted on InfoQ. Visit InfoQ

Microsoft launched Copilot in OneDrive for commercial users, enhancing the platform with AI-powered tools designed to improve document management and productivity. This new feature set allows users to generate summaries, compare documents, retrieve information, and create content more efficiently, thereby reducing the time spent on manual tasks.

Copilot provides several functionalities designed to simplify users’ interaction with their documents. One feature allows users to generate summaries of lengthy documents, enabling them to quickly understand the main points without reading through the entire text. Users can summarize single documents or analyze up to five files simultaneously.

There was some doubt regarding Copilot’s ability to summarize PDF files. User miketheinsurancecoach asked:

How long do you think it will be before Copilot can summarize or interrogate PDF files instead of just Microsoft files like Word and Excel?

In response, Scott Brant, a Microsoft MVP, confirmed:

The great news is that it can do that today. I just did not have time to include it in the video, but you can summarize PDFs and ask questions about PDFs. I just tried this with a 50-page PDF in my OneDrive, and it summarized and gave me the recommendations from the PDF in about 15 seconds too.

Another aspect of Copilot is its ability to compare multiple documents. Users can highlight key differences between files, such as contracts or reports, without opening them individually.

In addition to summarization and comparison, Copilot offers advanced information retrieval capabilities. Users can ask Copilot complex questions regarding the content of their OneDrive files. For instance, if a user needs specific data from several documents, Copilot can extract relevant information across those files, acting as a centralized resource for insights.

Moreover, Copilot can assist users in generating content for new documents. By selecting relevant files, users can prompt Copilot to suggest outlines, ideas, or even draft text for various applications, such as sales proposals or project plans.

Users can activate Copilot by hovering over a supported file and selecting the Copilot option from the menu. Currently, Copilot is available only to commercial users through the OneDrive web interface. Users need to sign in with their Microsoft work or school accounts to access these functionalities. Comprehensive guides and FAQs are provided to help users familiarize themselves with Copilot’s features.

Early feedback from the community has been positive, with many users expressing their enthusiasm for the new features. One of them wrote the following words below the official announcement post: 

Brilliant. Being able to access my SharePoint and Teams files in OneDrive means I can use Copilot on those too. Excellent.

For further details on using Copilot, users can refer to the official documentation, which covers various features and provides tips for efficient use.




Global NoSQL Market Size to Reach USD 86.3 Billion by 2032, Growing at a CAGR of 28.1%

MMS Founder
MMS RSS

Posted on nosqlgooglealerts. Visit nosqlgooglealerts

NoSQL Market Share

The key factors that drive the growth of the NoSQL (Not only SQL) market include the rise in demand for big data analytics.

WILMINGTON, DE, UNITED STATES, October 16, 2024 /EINPresswire.com/ — According to the report published by Allied Market Research, the global NoSQL market size is projected to reach USD 86.3 billion by 2032, growing at a CAGR of 28.1%. The report provides an extensive analysis of changing market dynamics, major segments, value chain, competitive scenario, and regional landscape. This research offers valuable guidance to leading players, investors, shareholders, and startups in devising strategies for sustainable growth and gaining a competitive edge in the market.

The global NoSQL market was valued at USD 7.3 billion in 2022, and is projected to reach USD 86.3 billion by 2032, growing at a CAGR of 28.1% from 2023 to 2032.

Request Sample Report (Get Full Insights in PDF – 350 Pages) at: https://www.alliedmarketresearch.com/request-sample/640

The rise in demand for big data analytics, enterprise-wide need for scalable and flexible database solutions, and growth in adoption of cloud computing technology are expected to drive the global NoSQL market growth. However, the high complexities of administrating NoSQL databases and the potential threat of data-related inconsistencies are expected to hinder market growth. Furthermore, the rise in adoption of advanced technologies such as AI & ML offers lucrative market opportunities for the market players.

The NoSQL market is segmented on the basis of type, application, industry vertical, and region. On the basis of type, it is categorized into key-value store, document database, column-based store, and graph database. On the basis of application, it is divided into data storage, mobile apps, data analytics, web apps, and others. The data storage segment is further sub-segmented into distributed data depository, cache memory, and metadata store. On the basis of industry vertical, it is categorized into retail, gaming, IT, and others. On the basis of region, the market is analyzed across North America, Europe, Asia-Pacific, and LAMEA.

If you have any questions, please feel free to contact our analyst at: https://www.alliedmarketresearch.com/connect-to-analyst/640

On the basis of type, the key-value store segment held the highest market share in 2022, accounting for less than two-fifths of the NoSQL market revenue, and is estimated to maintain its leadership status throughout the forecast period. This is attributed to its high scalability and its ability to support multiple data models on a single database with faster access, which are expected to continue driving its adoption. However, the document database segment is projected to manifest the highest CAGR of 29.0% from 2023 to 2032, as these database services help reduce the time and costs associated with optimizing systems in the initial phase of deployment.

On the basis of application, the web apps segment accounted for the largest share in 2022, contributing to more than one-fourth of the NoSQL market revenue, owing to growth in the usage of website-based solutions in several industries. However, the mobile apps segment is expected to register the highest CAGR of 31% from 2023 to 2032. Mobile apps provide several advantages, such as reducing costs, supporting business, and effectively controlling the business environment in the organization.

Enquiry Before Buying: https://www.alliedmarketresearch.com/purchase-enquiry/640

On the basis of region, the North America segment held the highest market share in terms of revenue in 2022, accounting for less than two-fifths of the NoSQL market revenue. The increase in the usage of NoSQL solutions in businesses to improve operations and the customer experience is anticipated to propel the growth of the market in this region. However, the Asia-Pacific segment is projected to manifest the highest CAGR of 26.8% from 2023 to 2032. Countries such as China, India, and South Korea are at the forefront, embracing digital technologies to enhance their effectiveness and competitiveness, which is further expected to contribute to the growth of the market in this region.

The report analyzes the profiles of key players operating in the NoSQL market such as Aerospike Inc., Couchbase Inc., IBM Corporation, Neo4j, Inc., Objectivity, Inc, Oracle Corporation, Progress Software Corporation, Riak, ScyllaDB, Inc. and Apache Software Foundation. These players have adopted various strategies such as collaboration, acquisition, and product launch to increase their market penetration and strengthen their position in the NoSQL market.

Buy Now & Get Up to 50% Discount on this Report (350 Pages PDF with Insights, Charts, Tables, and Figures) at:
https://www.alliedmarketresearch.com/NoSQL-market/purchase-options

COVID-19 Scenario

● The NoSQL market witnessed stable growth during the COVID-19 pandemic, owing to the dramatically increased dependence on digital devices. The surge in online presence of people during the period of COVID-19 induced lockdowns and social distancing policies fueled the need for NoSQL solutions.

● In addition, with the majority of the population confined to their homes during the early stages of the COVID-19 pandemic, businesses needed to optimize their operations and offerings to maximize revenue opportunities and support the rapidly evolving business environment following the outbreak.

Thanks for reading this article. You can also get individual chapter-wise sections or region-wise report versions, such as North America, Europe, or Asia.

If you have any special requirements, please let us know and we will offer you the report as per your requirements.

Lastly, this report provides comprehensive market intelligence. The report structure has been designed to offer maximum business value. It provides critical insights into market dynamics and enables strategic decision-making for existing market players as well as those willing to enter the market.

Similar Reports:

1. NoSQL Databases Market : https://www.alliedmarketresearch.com/nosql-databases-market-A191357

2. Canada NoSQL Market : https://www.alliedmarketresearch.com/canada-nosql-market-A311091

About Us:

Allied Market Research (AMR) is a market research and business-consulting firm of Allied Analytics LLP, based in Portland, Oregon. AMR offers market research reports, business solutions, consulting services, and insights on markets across 11 industry verticals. Adopting extensive research methodologies, AMR is instrumental in helping its clients to make strategic business decisions and achieve sustainable growth in their market domains. We are equipped with skilled analysts and experts and have a wide experience of working with many Fortune 500 companies and small & medium enterprises.

Pawan Kumar, the CEO of Allied Market Research, is leading the organization toward providing high-quality data and insights. We are in professional corporate relations with various companies, which helps us dig out market data, generate accurate research data tables, and confirm the utmost accuracy in our market forecasting. The data presented in the reports published by us is extracted through primary interviews with top officials from leading companies in the domain. Our secondary data procurement methodology includes deep online and offline research and discussion with knowledgeable professionals and analysts in the industry.

David Correa
Allied Market Research
+1 800-792-5285
email us here
Visit us on social media:
Facebook
X

Legal Disclaimer:

EIN Presswire provides this news content “as is” without warranty of any kind. We do not accept any responsibility or liability
for the accuracy, content, images, videos, licenses, completeness, legality, or reliability of the information contained in this
article. If you have any complaints or copyright issues related to this article, kindly contact the author above.



DataStax merges its data stack with Nvidia’s development tools to simplify AI … – SiliconANGLE

MMS Founder
MMS RSS

Posted on nosqlgooglealerts. Visit nosqlgooglealerts

The database company DataStax Inc. is teaming up with Nvidia Corp. as it strives to become the data platform of choice for enterprises’ artificial intelligence initiatives.

In an announcement today, the company said it’s integrating its AI capabilities with the Nvidia AI Enterprise platform. The company claims the new integrated tools offering, dubbed the “DataStax AI Platform, Built with Nvidia AI,” can reduce development time of AI applications that leverage proprietary data by up to 60% in some cases. It provides everything developers need to fine-tune their models and improve the accuracy of their responses.

DataStax said it’s offering a complete solution for AI that covers everything from data ingestion and retrieval to application development and deployment, together with continuous training.

What’s in the stack?

The key components include DataStax’s Langflow platform, which provides an open-source visual framework for building retrieval-augmented generation or RAG applications. The DataStax Langflow platform was launched earlier this year, after DataStax acquired the creator of the open-source Langflow project, called Logspace.

DataStax also supplies its integrated Data Management tools, which encompass its flagship NoSQL database AstraDB with integrated vector search, hybrid search, knowledge graph, RAG, real-time analytics, streaming and other capabilities. DataStax became one of the first traditional database companies to add vector search functionality last year, enabling unstructured data to be stored as vector embeddings for easier retrieval by large language models.

With that update, it paved the way for DataStax’s RAGStack offering, which is an “out-of-the-box RAG solution.” RAG is a key technique used in AI development that makes it possible to provide additional context to LLMs from outside data sources. It allows models to deliver more accurate query responses, improving the performance of generative AI applications.

DataStax said AI demands extremely diverse kinds of data, and so an integrated platform that provides access to it all is preferable to bolting on different tools for vector search, knowledge graphs and so on.

Meanwhile, the Nvidia AI Enterprise platform adds a host of other interesting capabilities for AI developers, including Nvidia’s NeMo Retriever tool, which makes it easy to connect individual LLMs to very specific datasets, and NeMo Curator, a data curation tool for building large datasets for pre-training and fine-tuning models.

Other components provided by Nvidia include the NeMo Customizer, which is a performant and scalable microservice that helps simplify model fine-tuning and alignment for domain-specific applications. The NeMo Evaluator aids development by automating the evaluation process to test the accuracy of fine-tuned AI applications, while NeMo Guardrails makes it possible to add safeguards and prevent toxic or biased outputs.

Nvidia AI Enterprise also integrates multimodal PDF data extraction capabilities, providing a blueprint for ingesting unstructured data from PDF files, and NIM Agent Blueprints, which is a catalog of pre-trained and customizable AI workflows for creating and deploying AI applications.

The complete package

The DataStax AI Platform, Built with Nvidia, looks to be the complete package for AI developers, and companies will be hard-pressed to find a more comprehensive platform for building and deploying their AI models. Whether or not it’s the best platform of its kind remains to be seen, but DataStax is boosting its chances of success by making it as flexible as possible.

Enterprises can deploy the platform on any of the major public cloud platforms – Amazon Web Services, Microsoft Azure or Google Cloud – as well as on-premises environments, the company said. That last option makes it especially useful for enterprises in heavily regulated industries, such as insurance, finance and healthcare, the company said.

The integration makes sense because a lot of customers are using both platforms anyway, the company added. It explained that one of the problems enterprises face when bolting together various disparate tools for AI is that things have a habit of breaking down. For instance, the online travel agency Priceline.com LLC was already using DataStax’s AI offerings in combination with Nvidia’s NeMo tools, and it was spending a lot of time on trying to make everything work smoothly.

“It will greatly reduce AI development time,” said Priceline Chief Technology Officer Angela McArthur. “Having them integrated will greatly reduce the complexity for companies like us.”

Constellation Research Inc. analyst Holger Mueller said the integrated offering is interesting because it brings together Nvidia’s proven infrastructure with a reliable platform-as-a-service vendor in DataStax.

“The partnership makes it clear that Nvidia has ambitions in software too and it will help the company in that regard,” the analyst said. “It makes it much easier for joint customers to feed their data into Nvidia’s software and hardware and get their generative AI apps up and running. Some companies might be concerned about the dependencies they’re entering through this partnership, but most won’t worry as they just want to build their first, AI-powered applications.”

DataStax says the integrated platform will also provide more accuracy, giving developers more dynamic control over the data they feed into each AI application so they can improve their responses.

That’s especially important because companies are increasingly trying to use generative AI to improve productivity, with things such as PDF-driven chatbots for customer service and AI-powered analytics tools for surfacing business insights.

“The companies we’re talking to see these use cases as laying the groundwork for what they really want to do,” said DataStax Chief Executive Chet Kapoor. “They want to build ‘transformational’ AI projects that fundamentally transform how they operate and optimize for their customers.”

Image: SiliconANGLE/Microsoft Designer




MongoDB, Inc. (NASDAQ:MDB) Director Sells $272,970.00 in Stock – MarketBeat

MMS Founder
MMS RSS

Posted on mongodb google news. Visit mongodb google news

MongoDB, Inc. (NASDAQ:MDB) Director Dwight A. Merriman sold 1,000 shares of the stock in a transaction dated Thursday, October 10th. The stock was sold at an average price of $272.97, for a total transaction of $272,970.00. Following the completion of the sale, the director now directly owns 1,130,006 shares in the company, valued at $308,457,737.82. This trade represents a 0.00% decrease in their ownership of the stock. The transaction was disclosed in a document filed with the Securities & Exchange Commission, which can be accessed through the SEC website.

MongoDB Stock Down 1.5%

MDB traded down $4.48 during trading on Tuesday, hitting $284.66. The stock had a trading volume of 679,636 shares, compared to its average volume of 1,448,085. The company has a market capitalization of $20.88 billion, a PE ratio of -102.90 and a beta of 1.15. MongoDB, Inc. has a twelve month low of $212.74 and a twelve month high of $509.62. The company has a debt-to-equity ratio of 0.84, a quick ratio of 5.03 and a current ratio of 5.03. The stock has a 50 day simple moving average of $266.49 and a two-hundred day simple moving average of $286.66.

MongoDB (NASDAQ:MDB) last posted its quarterly earnings data on Thursday, August 29th. The company reported $0.70 EPS for the quarter, topping the consensus estimate of $0.49 by $0.21. MongoDB had a negative net margin of 12.08% and a negative return on equity of 15.06%. The business had revenue of $478.11 million during the quarter, compared to the consensus estimate of $465.03 million. During the same period in the previous year, the firm posted ($0.63) earnings per share. The business’s revenue for the quarter was up 12.8% compared to the same quarter last year. On average, research analysts predict that MongoDB, Inc. will post -2.44 earnings per share for the current year.

Institutional Investors Weigh In On MongoDB

Several institutional investors and hedge funds have recently made changes to their positions in MDB. Vanguard Group Inc. raised its holdings in shares of MongoDB by 1.0% during the first quarter. Vanguard Group Inc. now owns 6,910,761 shares of the company’s stock worth $2,478,475,000 after purchasing an additional 68,348 shares during the last quarter. Jennison Associates LLC raised its holdings in shares of MongoDB by 14.3% during the first quarter. Jennison Associates LLC now owns 4,408,424 shares of the company’s stock worth $1,581,037,000 after purchasing an additional 551,567 shares during the last quarter. Swedbank AB raised its holdings in shares of MongoDB by 156.3% during the second quarter. Swedbank AB now owns 656,993 shares of the company’s stock worth $164,222,000 after purchasing an additional 400,705 shares during the last quarter. Champlain Investment Partners LLC raised its holdings in shares of MongoDB by 22.4% during the first quarter. Champlain Investment Partners LLC now owns 550,684 shares of the company’s stock worth $197,497,000 after purchasing an additional 100,725 shares during the last quarter. Finally, Clearbridge Investments LLC raised its holdings in shares of MongoDB by 109.0% during the first quarter. Clearbridge Investments LLC now owns 445,084 shares of the company’s stock worth $159,625,000 after purchasing an additional 232,101 shares during the last quarter. 89.29% of the stock is currently owned by institutional investors and hedge funds.

Wall Street Analysts Weigh In

A number of equities analysts have recently commented on MDB shares. Sanford C. Bernstein raised their target price on shares of MongoDB from $358.00 to $360.00 and gave the stock an “outperform” rating in a research report on Friday, August 30th. Mizuho raised their target price on shares of MongoDB from $250.00 to $275.00 and gave the stock a “neutral” rating in a research report on Friday, August 30th. Truist Financial raised their target price on shares of MongoDB from $300.00 to $320.00 and gave the stock a “buy” rating in a research report on Friday, August 30th. Stifel Nicolaus raised their target price on shares of MongoDB from $300.00 to $325.00 and gave the stock a “buy” rating in a research report on Friday, August 30th. Finally, Wells Fargo & Company raised their target price on shares of MongoDB from $300.00 to $350.00 and gave the stock an “overweight” rating in a research report on Friday, August 30th. One equities research analyst has rated the stock with a sell rating, five have issued a hold rating and twenty have assigned a buy rating to the company. According to MarketBeat, the stock presently has an average rating of “Moderate Buy” and an average target price of $337.96.

View Our Latest Stock Report on MongoDB

About MongoDB


MongoDB, Inc., together with its subsidiaries, provides a general-purpose database platform worldwide. The company provides MongoDB Atlas, a hosted multi-cloud database-as-a-service solution; MongoDB Enterprise Advanced, a commercial database server for enterprise customers to run in the cloud, on-premises, or in a hybrid environment; and Community Server, a free-to-download version of its database, which includes the functionality that developers need to get started with MongoDB.


This instant news alert was generated by narrative science technology and financial data from MarketBeat in order to provide readers with the fastest and most accurate reporting. This story was reviewed by MarketBeat’s editorial team prior to publication. Please send any questions or comments about this story to contact@marketbeat.com.


Article originally posted on mongodb google news. Visit mongodb google news
