Month: November 2023
Posted on MongoDB Google News.
Open source platform easily moves data of unlimited size from popular databases to a choice of 68 destinations, including vector databases
SAN FRANCISCO, November 28, 2023–(BUSINESS WIRE)–Airbyte, creators of the fastest-growing open-source data integration platform, today announced availability of certified connectors for MongoDB, MySQL, and PostgreSQL databases, enabling datasets of unlimited size to be moved to any of Airbyte’s 68 supported destinations that include major cloud platforms (Amazon Web Services, Azure, Google), Databricks, Snowflake, and vector databases (Chroma, Milvus, Pinecone, Qdrant, Weaviate) which then can be accessed by artificial intelligence (AI) models.
Certified connectors (maintained and supported by Airbyte) are now available for both Airbyte Cloud and Airbyte Open Source Software (OSS) versions. The Airbyte connector catalog is the largest in the industry with more than 370 certified and community connectors. Also, users have built and are running more than 2,000 custom connectors created with the No-Code Connector Builder, which makes the construction and ongoing maintenance of Airbyte connectors much easier and faster.
“This makes the treasure trove of data in these popular databases – MongoDB, MySQL, and Postgres – available to vector databases and AI applications,” said Michel Tricot, co-founder and CEO, Airbyte. “There are no limits on the amount of data that can be replicated to another destination with our certified connectors.”
Coming off the most recent Airbyte Hacktoberfest last month, there are now more than 20 Quickstart guides created by members of the user community, which provide step-by-step instructions and easy setup for different data movement use cases. For example, there are six for PostgreSQL related to moving data to Snowflake, BigQuery, and others. In addition, the community made 67 improvements to connectors that include migrations to no-code, which facilitates maintenance and upgrades.
Airbyte’s platform offers the following benefits.
- The largest catalog of data sources, which can be connected within minutes and are optimized for performance.
- A no-code connector builder that makes it quick and easy to create new connectors, addressing the “long tail” of data sources.
- Incremental syncs that extract only the changes in the data since a previous sync.
- Built-in resiliency: if a session moving data is disrupted, the connection resumes from the point of the disruption.
- Secure authentication for data access.
- The ability to schedule and monitor the status of all syncs.
Airbyte makes moving data easy and affordable across almost any source and destination, helping enterprises provide their users with access to the right data for analysis and decision-making. Airbyte has the largest data engineering contributor community – with more than 800 contributors – and the best tooling to build and maintain connectors.
Airbyte Open Source and connectors are free to use. Airbyte Cloud cost is based on usage with a pricing estimator here. To learn more about Airbyte and its capabilities, visit the Airbyte website.
About Airbyte
Airbyte is the open-source data movement leader running in the safety of your cloud and syncing data from applications, APIs, and databases to data warehouses, lakes, and other destinations. Airbyte offers four products: Airbyte Open Source, Airbyte Enterprise, Airbyte Cloud, and Powered by Airbyte. Airbyte was co-founded by Michel Tricot (former director of engineering and head of integrations at Liveramp and RideOS) and John Lafleur (serial entrepreneur of dev tools and B2B). The company is headquartered in San Francisco with a distributed team around the world. To learn more, visit airbyte.com.
View source version on businesswire.com: https://www.businesswire.com/news/home/20231128022919/en/
Contacts
Joe Eckert for Airbyte
Eckert Communications
jeckert@eckertcomms.com
Presentation: Securing the Software Supply Chain: How in-toto and TUF Work Together to Combat Supply Chain Attacks
Marina Moore
Article originally posted on InfoQ.
Transcript
Moore: My name is Marina. I’m a PhD candidate at NYU. We’re really focused on this space combining some research ideas with actual practical implementations that can be used. That’s what we’re going to talk about. There’s some theory, but mostly this is about how we actually use this stuff in practice. I’m going to focus on two tools. The overall idea is really how to combine these tools into a secure software supply chain.
What Are Software Supply Chain Attacks?
Software supply chain attacks, what are we actually talking about when we talk about these? First, we have a software supply chain. In order to attack something, you’d have to first define what it is that’s being attacked. This is one definition from Purdue where it defines the software supply chain as a collection of systems, devices, and people, which produce a final software product. This is basically everything that happens between when some developers write some code and when that code is actually run in a production system. An attack on the software supply chain is when one or more weaknesses in the components of the software supply chain are compromised to introduce alterations into the final software product. This is when anything happens in that chain that’s unexpected, or that changes some stuff in some way, especially in a way that can cause the final product to be vulnerable, maybe have some arbitrary software in it. These attacks are very common. From the 2022 Sonatype report, over the past 3 years, we’ve seen an increase of 700 or so percent in the number of supply chain attacks that are seen in the wild. This is a real problem. It’s happening. Here’s a few examples, they happen on all different pieces of the supply chain, like package managers, they happen in the source code area, in updates and distribution. All over the place. The CNCF, the Cloud Native Computing Foundation has a great catalog of a bunch of these different types of attacks that puts them in different categories. It’s not every attack that ever happens, but it’s a nice overview for folks new to this space, if you’re interested in learning more about what attacks are happening. This is just the ones that are on open source projects, which means they’re publishable and easy to find. It’s a great database.
Solutions
There are a lot of solutions that have been proposed in this space. I think we’ve heard about some of them. Because it’s such a big growing problem, I think there’s a lot of work happening. There’s a lot of good work but it all solves different pieces of the problem. In some ways, as in everything in cybersecurity, if you only solve one piece of the problem, the attackers just move to the place you didn’t solve. You really have to cohesively think about the system. That’s what we’re going to try and do. We’re going to broadly categorize these solutions in three different areas and then talk about how these come together. The first is evidence gathering. This is looking at what’s happening in the supply chain, gathering evidence about what should be going on. Information discovery is just looking at this evidence and trying to learn stuff about what’s happening in the system. Then, finally, we have policy and validation. This is, in some ways, the most important one, where you not only say we have a bunch of metadata, we have a bunch of information about what’s happening in the supply chain, but we also want to make sure that x happened and that y performed z, the exact stuff that needs to happen.
Common Link: in-toto
The first project we’re going to talk about is this project called in-toto. We like to think of it as a common link in the supply chain where you can really tie together a bunch of what we call point solutions, solutions that solve one piece or another in a more cohesive way. It’s actually implemented by other projects as a common data format even to communicate these different things. First of all, we have the evidence gathering. You have things like SLSA, as well as the various SBOM (software bill of materials) formats, CycloneDX and SPDX. All of these things provide some metadata that provides information about stuff that happened at some point. An SBOM lists the dependencies that you’re pulling in. SLSA talks about the information, like what happened at the build step, and so on. This is the evidence gathering piece. You would put that information into in-toto, in what we call in-toto links. It’s just a common format for the information. It’s the same information just transposed.
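For illustration, here is a simplified sketch of what an in-toto link for a build step can look like; the step name, file names, hashes, and key IDs are placeholders rather than values from any real pipeline:

```json
{
  "signatures": [
    { "keyid": "<functionary-key-id>", "sig": "<signature-over-signed-section>" }
  ],
  "signed": {
    "_type": "link",
    "name": "build",
    "command": ["python", "setup.py", "bdist_wheel"],
    "materials": {
      "demo-project/foo.py": { "sha256": "<hash-of-input-file>" }
    },
    "products": {
      "dist/demo_project-0.1-py3-none-any.whl": { "sha256": "<hash-of-output-file>" }
    },
    "byproducts": { "return-value": 0, "stdout": "", "stderr": "" },
    "environment": {}
  }
}
```

The hashed materials and products are what later allow a verifier to check that the output of one step matches the input of the next.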
Then you can send that information back to information discovery systems. These are systems like Sigstore, which provides the Rekor transparency log, which holds a big graph of information that is queryable. Things like GUAC, which is a project that looks at visualizing stuff that’s happening in your software supply chain by taking in lots of different types of metadata, as well as other projects in this space. Then, finally, we have the policy and validation, what, in in-toto, we call the layout. It lays out the steps, which can easily be thought of as policy as well. You can take all this information. Then you have what we call a supply chain orchestrator who decides on some policy that should actually be happening in the supply chain, writes up this policy. You could use this in any existing policy engine, any admission controller, anywhere where you’re pulling stuff in. You then define what should be happening. You can then compare this to what actually happens in those different links, a couple steps ago, which I think I have in this step. You have these links. You have the stuff that actually happened in your supply chain. You have the layout, which is the stuff that you want to happen. Then as an aside, you have some analysis of what’s happening and what could be done better, which you can use to iterate on the layout. When you put that together with the image, you can get this attested final product. Each of those different steps in the supply chain, they contain not just the steps that happened, but the outputs of the steps. That’s a cryptographic hash, which is then signed by the step, which means that you can actually check that the output of one step is the same as the input to the next step. You can make sure that no tampering happened in between these steps. In the layout, you can enforce that you check these actors did stuff here, and the output from that was then inputted here, and so on.
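As a rough, simplified sketch (not copied from the talk’s demo), a layout that wires a create step into a build step could look like the following; key IDs, artifact patterns, and commands are placeholders:

```json
{
  "signatures": [ { "keyid": "<project-owner-key-id>", "sig": "..." } ],
  "signed": {
    "_type": "layout",
    "expires": "2024-11-28T00:00:00Z",
    "keys": {
      "<alice-key-id>": { "keytype": "rsa", "keyval": { "public": "..." } }
    },
    "steps": [
      {
        "_type": "step",
        "name": "create",
        "expected_materials": [],
        "expected_products": [["CREATE", "demo-project/*"]],
        "pubkeys": ["<alice-key-id>"],
        "expected_command": [],
        "threshold": 1
      },
      {
        "_type": "step",
        "name": "build",
        "expected_materials": [["MATCH", "demo-project/*", "WITH", "PRODUCTS", "FROM", "create"]],
        "expected_products": [["CREATE", "dist/*.whl"]],
        "pubkeys": ["<alice-key-id>"],
        "expected_command": ["python", "setup.py", "bdist_wheel"],
        "threshold": 1
      }
    ],
    "inspect": []
  }
}
```

The MATCH rule is what expresses the “output of one step must equal the input of the next” check described above.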
What’s Missing: Distribution
One missing piece in this picture that I quickly summarized is the problem of distribution. You need to actually distribute these three things, the package, the policy, and the attestations to the user. Most importantly, you have to distribute this policy. If an attacker is able to compromise the policy of what should be done in the supply chain, then they’re able to compromise any piece of the supply chain, by just changing that policy.
How Do We Distribute in-toto Metadata?
That’s going to lead us into the next project that I’m going to talk about a bit. First, I’m going to talk a bit about the properties we need from secure distribution of in-toto metadata. We need to make sure that this information is timely. We talked a bit before about how this policy or this layout can iterate over time. Maybe you’re iteratively improving the security of your pipeline. You want to make sure that even if a policy was valid today, and then you change your process from policy A to policy B, you want to make sure that people in the future will only validate against policy B, even though you previously signed policy A. You also have to make sure that these policies are coming from trusted users. This is especially important because the policy actually defines the users that will be trusted for the different steps of the supply chain. You can build trust, but you have to start with some point of trust. Finally, this has to be compromise resilient. If this becomes the single point of failure and the place to attack in a software supply chain system, then that will happen. We’ve seen really large motivated attackers in this space. I think the SUNBURST SolarWinds attack is a great example there. You have to make sure that even if one thing goes wrong, you can either recover or prevent that from causing a full breakdown of the security properties.
The Update Framework (TUF)
This is where we’re going to come into this project called The Update Framework, or TUF. This is a framework for secure software updates, and really for secure distribution, which is applied to updates. It was built with compromise resilience and revocation built in from the ground up. It assumes that repositories that host software as well as keys or developer accounts can and will be compromised, and so it provides means to both mitigate the impact of the compromise and allow for secure recovery after a compromise takes place. It provides a graceful degradation of security as more stuff is compromised. Obviously, if your whole system is compromised, stuff can still go wrong, but each individual component has a minimized impact.
TUF Design Principles
It does so through the use of four design principles. These are responsibility separation, multi-signature trust, explicit and implicit revocation, and minimizing individual key and role risk. To start, we have this idea of responsibility separation, which comes back to this idea of delegation. I think this was a big thing in the keynote as well, this idea that you start with a root of trust, and you delegate down from there. TUF uses that property to divide responsibilities into these different roles, delegated from a root of trust. By minimizing the scope of the root of trust itself, passing most of the day-to-day use down, you can actually utilize any keys involved in the root of trust less often, which means they can be more securely used. It allows for better hardening because you’re using this less. The more something is used, the less power it’s given. For example, two of the different roles in TUF provide content attestations, or information about the actual content of a package or a software update. While a different role in TUF provides information about timeliness, to make sure that you’re using the current version of a package or the current version of a policy, as we were talking about before, by providing this kind of time key. I’ll show you the exact mechanisms and how we apply those. We basically separate the different roles using this idea of delegations. We have one role, it’s responsible for the integrity of packages, but that role can further delegate. I specifically want the alpha projects to be signed by Bob, and the prod projects to be signed by Charlie. If Bob signs alpha, everything is fine, but if he signs prod, then that will be rejected, because he’s specifically trusted for alpha. It’s minimizing the scope of a compromise of Bob’s key.
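A hypothetical targets delegation expressing that “Bob signs alpha, Charlie signs prod” example might look like this in TUF metadata (role names, path patterns, and key IDs are illustrative only):

```json
{
  "signed": {
    "_type": "targets",
    "version": 3,
    "expires": "2024-01-01T00:00:00Z",
    "targets": {},
    "delegations": {
      "keys": {
        "<bob-key-id>":     { "keytype": "ed25519", "keyval": { "public": "..." } },
        "<charlie-key-id>": { "keytype": "ed25519", "keyval": { "public": "..." } }
      },
      "roles": [
        { "name": "alpha", "keyids": ["<bob-key-id>"],     "paths": ["alpha/*"], "threshold": 1, "terminating": false },
        { "name": "prod",  "keyids": ["<charlie-key-id>"], "paths": ["prod/*"],  "threshold": 1, "terminating": false }
      ]
    }
  },
  "signatures": [ { "keyid": "<targets-key-id>", "sig": "..." } ]
}
```

A client resolving a package under prod/ will only accept a signature from Charlie’s key, so a compromise of Bob’s key cannot affect prod packages.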
Next, we have minimizing individual key and role risk. This goes back to that idea of more or less protected keys. For example, the root is a really high impact role, which means we need highly secure keys, which means these keys will be harder to use. You can have the root role require multiple signatures from multiple well-secured keys, say YubiKeys that are stored in a lockbox somewhere. I think for one project that uses TUF, we have five trusted root signers requiring a threshold of three of them to sign it, and they’re distributed across different continents. These keys are stored in safe places. It basically requires a whole Ocean’s Eleven movie to actually compromise this root of trust. It also means that it requires a week of planning to actually do a signing of it with these people across three continents. Versus, you have lower impact roles which you can sign with online keys, which are much easier to use. You can do on-demand signing. You can change things every day, or every hour. By necessity, these keys are less secure, because you can’t have five people across continents pushing a button every minute, if you need something to change every minute. By creating this set of delegations, you go from the top, with the highly secured, hard-to-use, but very secure keys, all the way down to these online, really easy-to-use, but slightly less secure keys, because if a server is compromised, all the online keys on that server will be compromised alongside it.
Next, we have the principle of multi-signature trust. This is just the idea that you require multiple signatures on a particular role or a particular piece of metadata, so that it’s not just one key that has to be compromised, it’s like two or five or whatever the numbers. One key compromised isn’t enough. Finally, we have explicit and implicit revocation. Implicit revocation is just timeouts. That’s pretty straightforward. If stuff expires, then it’s implicitly revoked. The explicit revocation was really a key design principle of TUF that ensures that anyone higher in the delegation chain can explicitly revoke anything lower in the delegation chain. If there’s online keys which are used very frequently, happen to be compromised, anything above them in the delegation chain can immediately revoke it and that will be seen by everybody because of this timeliness property of TUF. Again, I’ll explain how. This is the why portion of the talk.
The Targets, Snapshot, and Root Roles
Now we’re going to get into the how. That’s a good transition. We’re going to build the actual architecture of TUF starting from the packages. If you look at the far right of the slide, you see the actual packages that you’re distributing. These don’t have to be packages. They can really be anything you want to securely distribute. For now, we’ll talk about the use case of, you have some built packages, and you’re going to get them to some end users. You have three packages that you’re trying to distribute. The first thing we’re going to add is targets metadata. The targets role in TUF is responsible for the integrity of files. This is I think the classic image signing, the first thing you think of when you think of, how do you securely distribute something? You have someone sign it. This targets role signs stuff, and not just the targets role itself, but also the roles that it delegates to sign stuff. You can have an offline, well-secured targets role that says, all A packages are trusted in this direction, all B and C packages are trusted over here. Then the B, C role actually signs those packages. Then, for some reason, A also has a further delegation. This could go on. Especially if you have a big organization, you don’t have to share keys across the organization, you can just say, ok, this team has this key, they’ll use that. This team over on this other part of the org has a different key, which prevents key sharing across these different people.
Next, we’re going to add the snapshot role, which provides a sense of consistency of all the different metadata that’s on the repository. This will be important because of the next role I’ll introduce, but basically it makes sure that you have a list of all the images that are currently valid, which means that you can hash that and have a timestamp on it, which gives you that timeliness property. If you check the timestamp, which has a hash of the snapshot in it, you can make sure that any package you’re downloading is the one that’s currently valid today, and not one that was valid at some different point in time. Then, finally, of course, we have the root of trust, which provides the keys that should be used by all these other top-level roles, as well as a self-updating feature for the role itself.
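That chain of freshness can be sketched as two small metadata files; the version numbers, hashes, and expiry dates below are placeholders. The frequently re-signed timestamp pins the current snapshot, and the snapshot pins the current version of every other metadata file:

```
timestamp.json (re-signed often, short expiry):
{
  "signed": {
    "_type": "timestamp",
    "version": 42,
    "expires": "2023-11-29T00:00:00Z",
    "meta": {
      "snapshot.json": { "version": 42, "hashes": { "sha256": "<hash>" }, "length": 1024 }
    }
  },
  "signatures": [ { "keyid": "<timestamp-key-id>", "sig": "..." } ]
}

snapshot.json (lists the current version of all other metadata):
{
  "signed": {
    "_type": "snapshot",
    "version": 42,
    "expires": "2023-12-06T00:00:00Z",
    "meta": {
      "targets.json": { "version": 7 }
    }
  },
  "signatures": [ { "keyid": "<snapshot-key-id>", "sig": "..." } ]
}
```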
ITE-2: Combining TUF and in-toto
What does it look like when we actually combine these two pieces of technology? This is part of this goal of getting end-to-end software supply chain integrity. You have to protect the whole system, not just pieces of it. You have TUF which can securely distribute not just those packages that we talked about here, but you can actually put in that right-hand side of the image, there was in-toto layouts that you want to securely distribute, as well as the attestations, and all the other pieces of metadata that you need. Then you can have these layouts distributed from secure roles with high assurance. From the secure targets role, you can delegate a specific high assurance role that’s used for these layouts, so the policies are only signed by that high assurance role. These other roles which are used more often don’t have permission to sign them. Here’s a nice diagram of this in practice. This has actually been implemented at Datadog in practice for the past 5 years or so. There’s also an ongoing integration at an IoT company, Torizon, which has some interesting scalability properties as well. If you look at this picture of how these things fit together, in the top left over here, we have those TUF roles. This looks a lot like that picture I showed earlier. That’s just what you saw there. The main difference is what this targets is pointing to. The targets metadata has different signers, which sign the actual packages in the in-toto metadata. Then they have a direct delegation to the layouts and the policy pieces of it. That’s a more direct link, because it’s signed with those offline keys used in the targets, versus these ones down here used every day to sign new packages, used more frequently which makes them necessarily slightly less secure. Those are just separated. If anything goes wrong, you can obviously revoke it and all of those things.
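One way to express that separation in the TUF metadata, sketched loosely after ITE-2 rather than copied from any production deployment, is a targets role whose delegations send the layouts and layout-signing public keys to an offline, multi-signature role and the frequently changing packages and links to an online role (key definitions and signatures elided; names and path patterns are illustrative):

```json
"delegations": {
  "roles": [
    { "name": "layouts", "keyids": ["<offline-key-1>", "<offline-key-2>"], "threshold": 2,
      "paths": ["*.layout", "*.pub"], "terminating": true },
    { "name": "packages-and-links", "keyids": ["<online-key>"], "threshold": 1,
      "paths": ["*.whl", "*.link"], "terminating": false }
  ]
}
```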
Demo
We’re going to start out with a TUF repository. These are the roles that we saw in TUF: we have the root, snapshot, targets, and timestamp. This is all generated using an open source tool called the Repository Service for TUF, which is a new OpenSSF project, which basically is working on the usability of spinning up these TUF repositories. We use that for the demo. These bins roles are delegated targets roles. They’re just done in an automated way. This is really useful for certain applications, but it’s just done by default in this implementation. Going into each of these, we have the root.json, which contains, as you can see, all those roles I talked about before. It has those delegations to the snapshot, the timestamp listed in here with trusted keys, and all of that. Then we have the targets, which currently just includes these succinct delegations. This is just an automated delegation format to those bins on the repository. Timestamp, it has the hash of the snapshot, like I mentioned. These bins are currently empty, this is just the starting state of the demo. As you can see, there’s no targets listed.
Now we’re going to actually do something interesting. We have Alice, who is the supply chain owner, who’s going to define this policy. She’s going to define an in-toto layout that defines everything that should be done in our supply chain. She’s defined the TUF. Then we have this create step, a build step, what should happen there, the expected outputs. The fact that Alice should sign the result of the build, and then an inspection, which is basically just a comparison to make sure that the output matches. This is our initial state of the supply chain. This is a format that’s defined by in-toto, but it’s hopefully readable by humans. Then we’re going to upload all of those to our TUF repository. Now, if we refresh this, then the next version of the metadata of these targets includes the layout signed by Alice’s public key, the root.layout. In the other bin, we have her actual public key, because you have to distribute not just the layout, but also who signed the layout, because you don’t know a priori what Alice’s public key is. This is where you’re getting that is through another one of these TUF delegations. Alice created the project. She’s actually doing the supply chain. As we go along the supply chain, she’s creating these links, these attestations to the different steps that are done. Now building it. Creating all that metadata. Then, we’re going to see in the repository, once she uploads it. All that metadata is now going to appear in these delegations in the next round. Now there’s a lot more stuff going on here, there’s a lot more targets. We have the links. Each of the links are included as their own artifact, as well as here we have the wheel, which is the Python final artifact that will be distributed. We still have that root layout, and the key, and another link. In practice, again, there would probably be an offline role signed with more secure keys for specifically the layout and the public key of Alice. For the sake of the demo, these are all combined, because we’re doing it all online. Again, in practice, you probably separate that out a little bit more.
Now we have the client who’s actually verifying that everything happened. All the client has to do is look at this in-toto layout, which is defined by Alice, and make sure that everything that was defined in that layout actually happened by the expected actors. This is done in verbose mode, and so you can see all the different verification steps that happen. It makes sure that all the steps happen. That passed because Alice did in fact make this project, and we printed out our hello world example.
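For reference, client-side verification with the Python reference implementation’s command-line tool looks roughly like this; the file names match the demo described here, and the exact flag spellings may differ between in-toto versions:

```shell
in-toto-verify -l root.layout -k alice.pub -v
```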
Let’s make this a little bit more interesting. What if we then change the layout so that it’s not just Alice building this project all by herself. We’re going to clear out her state a bit, get rid of all those targets that were in there. This is just to show you that it’s empty now. Then we’re going to reupload the layout.
This is a new layout. This is the second layout. We now have a new key here. We have a Bob role that was defined. Then in addition, we have this update step, which was added to the supply chain. Not only was the project created, it can now be updated by somebody else before it’s then built and sent out. Bob has permission for this update step. As you can see, Alice now generated the project. This is the video just uploading the layout, as before, and the public key. Then, Bob is going to pull the project and make some changes to it, which is allowed again by the supply chain steps. Then he’s going to change it and upload the links about exactly what changes were made. Then Alice is going to build it and upload it as before, getting all that metadata pushed to the repository. Now we can see again, we have the links. We’ll have the final artifact in here as well.
This is actually interesting too, the way you actually link these things together. When you actually update the wheel, you have to know which layout is associated with it. This is our method in the metadata for linking things together. There’s a bunch of little annotations, and they’re all specified if you’re interested in learning more there. Basically, all the different things are linked together. In this demo, we just have one layout for one project. Of course, in practice, you have 100 layouts for 100 projects, and so you can tie this all together and include them all in the same repository. The client now downloaded it and ran all that verification again. As you can see, it all ran and passed. That’s all the steps that happened. Now we see that it’s the same project, but Bob was also here.
Now we have the really fun part, or dangerous part, depending: an adversary who tries to tamper with the supply chain. This is an adversary who does not have access to either Alice or Bob’s private keys. There’s I think some details about which attacks can and can’t be caught, as always, but in this example, yes, the adversary does not have access to Alice’s private key. The attacker built the project, they were able to upload it to the repository, but they didn’t have the proper keys. If you look at the one the attacker uploaded, you can see its hash. This is the hash of the wheel that was uploaded, which is, of course, different from the hash of the valid one from the legitimate supply chain owners. Basically, this is the malicious one. The verification failed, because this was an improperly signed metadata file. The first step succeeded, but then you realize that this was caught by the disallow rule for this step. We’re going to force download it because this client decides they want to run the code anyway, even though it’s not verified, and you see the evil change. This is the bad one, this is the one that we did not verify.
Summary
Basically, the point there is that you can put these two things together. We’re building this tooling to really make this easy, because I think each of these different pieces solves a piece of the problem, but you really need to put it together to make this work end-to-end. That’s the main goal here. You can use TUF to distribute in-toto metadata, as well as the actual artifacts themselves, to get end-to-end software supply chain integrity. You’re tying all these steps together. You have the output of one go into the input of the next. You could have this layout signed very securely with these offline TUF targets keys to add a layer of compromise resilience, as well as the verification properties that are present in TUF. We’re preventing replay attacks as well, so we can’t use old policies or old attestations.
I think we have a couple of places where we’re already building this in practice. This is designed to be used in practice. It’s used by Datadog. There’s an ongoing integration with Toradex, which is an IoT manufacturer. We’re also working with some of the open source communities, folks like the Python Package Index, RubyGems, and so on, about how to do all this stuff. All of this work is open source, academic, so we’re able to collaborate openly, which is always fun.
Software Supply Chain Security and Web Systems
Right now, we focus a lot on software distribution, supply chain security aspect of this. I think it always has been interesting to look at this in comparison and around the PKI web systems, because I do think that one of the big things is that this area’s a lot more distributed. You can actually download your software from a lot more people than you have web browsers, for example. This idea of roots of trust, you end up having to have a bunch of them. Whereas something like the web, you have actually much fewer of them, even though you still have this large collection of CAs that you trust. I do think that there are interesting applications of this, in that space. I haven’t investigated particularly how this would apply all the way down to the browser. I do think this idea of root of trusts is fundamental to, why do you trust different things? How do you know who you’re trusting? How do you know why you trust them? All of those pieces. It can be applied there. We haven’t done that. Definitely something that’s interesting to look at.
Questions and Answers
Participant: I think that cryptographic hashes are a pretty much outside mechanism for proving exactly what you mentioned, that an artifact is the same as it was 10 years ago, apart from hash collisions and quantum computers.
Moore: The quantum computer is different.
Participant: That was a thing before ChatGPT was released, so it’s been released now, what was it generating?
Participant: I can see how this mechanism created a secure supply chain in a somewhat isolated environment. What about the external components of your supply chain, because a lot of repositories that distribute packages don’t use signing at the moment. How do you incorporate the nasty outside world into your lovely secure supply chain?
Moore: One of the things about in-toto is that it’s fairly unopinionated. in-toto allows for really strict, strong requirements that says every single thing has to be signed, and this thing has to exactly lead to that thing, which of course is the goal we’re all working towards. It’s aware that there are steps that maybe differ. We don’t live in a world, for example, with reproducible builds, and so you have to just trust the build system to do what it’s going to do. Because if you build something twice in two different computers, the hash output will be different. It’s aware of these kind of limitations in existing systems. Basically, it’s just by defining a layout that says, yes, we know we’re pulling in some untested dependencies today, and then maybe sometime down the line you can update that layout to say, no, every dependency that’s pulled in should be verified by an engineer, or whatever the process is.
Participant: npm published a worrying statistic in 2020 that only 8% of maintainers use two-factor authentication, the other 92% use user name and password. Most people dislike passwords, so getting into that supply chain is not that hard.
Moore: A lot of those package repositories are working on improving at least the 2FA piece. Even then you have things like typosquatting. People are just pulling random code.
Participant: We’re almost to that point, as you were showing us the Datadog example, it seemed like they were building their own wheels because they weren’t prepared to trust the ones that [inaudible 00:34:42].
Moore: I think that’s the reality today is that, you should know if something comes from the source, you have to do it yourself. I think that the open source community is working towards a better world, but I don’t think we’re there yet, just quite.
Posted on MongoDB Google News.
MongoDB, Inc., which trades under the ticker MDB, now has 26 market analysts covering the stock. The analyst consensus now points to a rating of ‘buy’. Target prices range from a low of $250.00 to a high of $500.00, with the average target price sitting at $436.12. Given that the stock’s previous close was $401.91, this indicates a potential upside of 8.5%. It’s also worth noting that the 50-day moving average is $357.59 and the 200-day moving average is $319.95. The company has a market capitalization of $28.72B. The stock price is currently $402.57 USD.
The potential market cap would be $31,165,903,959 based on the market consensus.
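As a quick sanity check of those figures, here is a minimal sketch using only the numbers quoted above (the article’s exact dollar figure was presumably derived from the share count rather than the rounded market cap, so the result differs slightly):

```java
public class MdbUpside {
    public static void main(String[] args) {
        double previousClose = 401.91;   // previous close quoted above
        double averageTarget = 436.12;   // average analyst price target
        double marketCapUsd = 28.72e9;   // current market capitalization

        double upside = averageTarget / previousClose - 1.0;
        double impliedMarketCap = marketCapUsd * (1.0 + upside);

        System.out.printf("Potential upside: %.1f%%%n", upside * 100);        // ~8.5%
        System.out.printf("Implied market cap: $%,.0f%n", impliedMarketCap);  // ~$31.2B
    }
}
```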
The company is not paying dividends at this time.
Other data points of note are a P/E ratio that is not applicable, revenue per share of $21.28, and a -6.7% return on assets.
MongoDB, Inc. is a developer data platform company. Its developer data platform is an integrated set of databases and related services that allow development teams to address the growing variety of modern application requirements. Its core offerings are MongoDB Atlas and MongoDB Enterprise Advanced. MongoDB Atlas is its managed multi-cloud database-as-a-service offering that includes an integrated set of database and related services. MongoDB Atlas provides customers with a managed offering that includes automated provisioning and healing, comprehensive system monitoring, managed backup and restore, default security and other features. MongoDB Enterprise Advanced is its self-managed commercial offering for enterprise customers that can run in the cloud, on-premises or in a hybrid environment. It provides professional services to its customers, including consulting and training. It has over 40,800 customers spanning a range of industries in more than 100 countries around the world.
Anthony Alford
Article originally posted on InfoQ.
Meta AI Research announced two new generative AI models: Emu Video, which can generate short videos given a text prompt, and Emu Edit, which can edit images given text-based instructions. Both models are based on Meta’s Emu foundation model and exhibit state-of-the-art performance on several benchmarks.
Emu Video uses a factorized or two-step approach for video generation: first generating an image based on the text prompt, then generating a video from the prompt and generated image. Both steps use a single fine-tuned Emu diffusion model, unlike previous methods such as Make-a-Video which use a pipeline of distinct models. Emu Edit is also based on the Emu diffusion model, but includes a task-embedding layer, which converts the text instruction prompt into an additional conditioning vector. Both Emu Video and Emu Edit were evaluated by human judges, who rated those models’ outputs on generated image quality and instruction faithfulness. Both models outperformed baseline models a majority of the time; in the case of Emu Video, 91.8% of the time on quality and 86.6% on faithfulness. According to Meta,
While certainly no replacement for professional artists and animators, Emu Video, Emu Edit, and new technologies like them could help people express themselves in new ways—from an art director ideating on a new concept or a creator livening up their latest reel to a best friend sharing a unique birthday greeting. And we think that’s something worth celebrating.
The Emu foundation model was announced earlier this year at the Meta Connect event. It is a latent diffusion model that is pre-trained on over 1 billion image-text pairs, then fine tuned on “a few thousand carefully selected high-quality images.” Emu can generate “highly visually appealing” images, with human judges preferring its output to Stable Diffusion XL over 70% of the time.
To create Emu Video, the researchers used a dataset of 34 million video-text pairs to further fine-tune an Emu foundation model; the model learned to predict several future video frames given an initial frame image. The resulting model can produce four-second long videos of 512×512 pixels at 16 fps. In addition to text-to-video, the model can generate a video from a user’s image; for this task, its output was preferred 96% of the time over that of the baseline VideoComposer model.
To train Emu Editor, the Meta team created a synthetic dataset of 10 million samples. Each sample consists of an input image, a textual instruction, a desired output image, and a task index. The index indicates which one of sixteen predefined tasks the instruction represents, such as removing an object or changing the image style. During training, the model learns an embedding for each task. The model can learn a new task by fine-tuning the embedding layer on just a “handful” of new examples.
In a discussion on Reddit, one user posted that:
The most interesting thing here is [the] appendix where they describe how they create the training dataset. They use a toolchain involving LLaMA, DINO, Segment Anything, and an image generator to create millions of image -> instruction -> output pairs. This is a real success story for synthetic data.
In a discussion on Hacker News, several users expressed disappointment that the models have not been open-sourced, stating that “Meta had been on an open source roll lately.” Meta did create a demo website for both Emu Video and Emu Edit. Meta also released the Emu Edit benchmark dataset on Hugging Face.
Posted on MongoDB Google News.
Document database Couchbase is adding a columnar side-car to boost analytics performance for users who want more insight into their real-time data.
Announced at AWS re:Invent 2023 in Las Vegas, the new service introduces a columnar store and data integration into the Capella Database-as-a-Service (DBaaS) for applications such as customer profiling and special offers.
The in-memory analytics system is only available as a package with the main DBaaS, but offers support for Tableau and PowerBI for analytic development and visualization, the company said. It also launched a conversational coding tool dubbed Capella iQ designed to help developers use natural language interactions with ChatGPT for SQL++ development. Other LLMs will be added in the future.
The new service will be in private preview from next year and is expected to become generally available in fall.
Keen industry watchers may notice that MongoDB, a fellow document database designed for modern internet-native applications, added analytics features last year. The company created column store indexing to help developers create and maintain a purpose-built index to speed up many common analytical queries without requiring any changes to the document structure or having to move data to another system. Analysts said it might be a good system for straightforward queries but not complex modeling.
Couchbase emphasized the differences in its approach. It said MongoDB created a duplicative indexing structure against the data that persists in its singular storage engine, WiredTiger. Couchbase claimed WiredTiger consumes half the available memory when in use, “which is one of the reasons that MongoDB does not scale as efficiently as Couchbase and Capella.” The Capella approach means both columnar and document engines work and scale their workloads independently while living in the same cluster, the company claimed. We’ve asked MongoDB about this and will update if they respond.
Chris Bridgland, senior director, solutions engineering and customer success Europe, said the columnar store supports Avro files, popular in the telco industry, among other features allowing the system to analyze third-party data in its DBaaS.
He said another difference in the Couchbase approach was that it creates the index and the schema on the read, rather than before bringing in the data.
“We were already seeing in the current testing around about two to two and a half times improvement in performance,” Bridgland said.
Doug Henschen, vice president and principal analyst with Constellation Research, said the columnar database move would appeal to Couchbase customers looking for more analytical capabilities to go along with the platform’s transactional capabilities.
“The company has had analytical capabilities for five years and more than 30 percent of its customers are using them,” he said. “This announcement brings analytical performance to the next level as required by many next-generation applications that blend transactional and analytical needs. For example, a large number of Couchbase customers store tons of loyalty program data. Capella’s columnar and real-time capabilities could be used to power personalized offers and recommendations to customers in near real time.”
However, it was not necessarily a competitive gain against MongoDB, which has had analytical capabilities for several years and deepened them last year. “I don’t see it so much in a competitive context as I do in a case of two database providers helping customers to build next-generation applications including analytics on their respective platforms,” Henschen added.
He said it would be difficult to determine the differences in performance until customers started building proof-of-concept systems.
While Snowflake and SingleStore have both laid claim to performing analytics and transactions on the same system, the emergence of so-called “transanalytical” capabilities has been greatly exaggerated, Henschen said.
“There are niche databases that support both,” he said, “but they’re far from mainstream. And both Couchbase and MongoDB will be the first to tell you that their respective analytical capabilities will not displace or compete head-on with analytical data platforms like Snowflake or Databricks. What they’re after is providing analytical capabilities for the operational data within the applications built on their databases. I think we’ll continue to see separate, specialized products for some time to come.” ®
Michael Redlich
Article originally posted on InfoQ.
Welcome to the InfoQ podcast
Hello, it’s Daniel Bryant here. Before we start today’s podcast, I wanted to tell you about QCon London 2024, our flagship conference that takes place in the heart of London next April, 8th to 10th. Learn about senior practitioner’s experiences and explore their points of view on emerging trends and best practices across topics like software architectures, generative AI, platform engineering, observability, and the secure software supply chain. Discover what your peers have learned, explore the techniques they’re using, and learn about the pitfalls to avoid. I’ll be there hosting the platform engineering track. Learn more at qconlondon.com. I hope to see you there.
Hello and welcome to the InfoQ podcast. I’m your host, Daniel Bryant, and this week we’re going to try something a bit different with a brief review of the recently released InfoQ Java and JVM trends report. Now, many folks in the InfoQ team and the wider Java community were involved in the production of this and we’ll give them a proper shout-out during the conversation, but I sat down with the primary author of this year’s report, Michael Redlich, who leads the Java topic here at InfoQ.
So, welcome to the InfoQ podcast, Mike. Could you introduce yourself to the listeners please?
Michael Redlich: Sure. Good morning. I’m Michael Redlich. I am the Lead Java Queue editor at InfoQ. I retired from ExxonMobil about five months ago after 33 and a half years of service. So, all of my work at InfoQ and other contributions to open source are my full-time job these days.
What are the headline takeaways from the latest InfoQ Java Trends report 2023? [01:21]
Daniel Bryant: We’re going to talk about the upcoming InfoQ Java trends report. So by the time this chat is published, the report will have been published as well. So we’ll definitely dive into who’s contributed to that report because it’s very much a community effort. You and I are talking about it today, but there’s a lot of folks behind the scenes contributing to this, so I’d definitely like to give a shout out to all those folks too. But what are the key takeaways for the Java trends report this year?
Michael Redlich: One of the first things that I think is foremost is Java virtual threads, JEP 444, which was released in September with JDK 21. There was just so much content out there, especially from the Oracle dev advocates and other folks that were providing a lot of information on the background and how to use virtual threads and those kinds of things. One of the other things that is really new this year is a commitment from Oracle to evolve the Java language for students and beginners so they can more easily write their first Hello World applications without the need to understand more complex features of the language.
And related to Java 21 were also four features that had gone through their incubator and preview releases, and now were finalized for JDK 21. So examples of course are virtual threads, pattern matching for switch, and record patterns. That’s three of the four that I can recall at the moment. A new project to complement Project Loom and Project Valhalla is Project Galahad, and this is related to GraalVM aligning itself with the OpenJDK release cadence of every six months. And so this was created to contribute Java-related GraalVM technologies and prepare them to be in an upcoming release of a JDK.
Another new interesting feature from this past year is a new MicroProfile JWT bridge specification, and this was a collaboration between the MicroProfile and Jakarta EE working groups. And this was a way for Jakarta Security applications to use the MicroProfile JWT specification. And this is in a single annotation, so this is still a work in progress. From what I understand, the folks are trying to have this in the release of MicroProfile 7.0, which will be sometime next year. So this is very early. There are some examples out there on how to potentially use that. So I’ve experimented a little bit with it and I think it’s going to be a fun feature.
It’s analogous to Jakarta NoSQL or other Jakarta EE specs that use MicroProfile Config. So it’s a similar kind of relationship with this. So I think this is new and exciting and I look forward to that being released. So those are the highlights. We have lots of good content in the upcoming release of the InfoQ Java trends report, so stay tuned for that.
Who contributed to the creation of the latest InfoQ Java Trends report? [04:15]
Daniel Bryant: Fantastic, Mike, fantastic. Yes, no, I’ve definitely got to shout out that we do look at, say surveys, we look at data because I know a lot of folks reach out to us and say, “How do you do these trend reports?” And they are very much opinion pieces. We do look around, but you and the team and we bring in other folks in the community to contribute to as well. Do you want to shout out to anyone, Mike? I know that you’ve led the initiative this year, but there’s many voices behind this report, right?
Michael Redlich: Yes, so we have awesome editors in the Java space and the main contributors were myself and Johan Janssen. And then we have quarterly meetings with the group and we discussed the crossing the chasm model this past August and what technologies should move between the various spaces of that crossing the chasm model. So that included Ben Evans, Erik Costlow, Karsten Silz, Olimpiu Pop, Bazlur Rahman, and Shaaf Syed. And then the external contributors were Ixchel Ruiz, Developer Advocate at JFrog, Alina Yurenko, Developer Advocate for GraalVM at Oracle Labs, and then Rustam Mehmandarov, Chief Engineer at Computas AS. So these are some great contributors, they’re all Java champions, so they really provided a lot of great input.
Is the latest version of the Java language and platform, Java 21, seeing large adoption? [05:25]
Daniel Bryant: Fantastic, Mike. Yes, there’s so many familiar names there, both practitioners and dev advocates, and now owning my bias, as a previous dev advocate, I’m obviously a big fan of listening to both of these voices. InfoQ is very much based on the practitioner role. It’s by practitioners for practitioners and it’s great to learn about use cases and specific implementations. I think the value that dev advocates can often bring is providing their bigger picture look across the industry and helping us pattern match on common issues and solutions.
So, to get started on our analysis, I wanted to dive into Java adoption first. Now obviously, we had the release of Java 21 in September, which both you and Bazlur Rahman, and many of the Java team have covered already on InfoQ. So listeners can check out the coverage there, but I wanted to get your thoughts on what the real world adoption has been like. Now, Java 21 will be an LTS release for many vendors, so that’s long-term supported, often commercially supported, in comparison with the shorter support windows offered for the minor version updates. So are folks rushing to Java 21 or more slowly making their way to Java 21 or maybe even the last LTS release, which was Java 17, right?
Michael Redlich: Alina Yurenko said that she sees the speed of adoption of the latest Java versions increasing, and she’s seen this at conferences and questions that she gets from folks in the Java community. And then they had their own community survey last year and she said that 63% of their users were already on Java 17 or higher. So it seems like yes, there is more adoption. Java 17 was the last LTS release before Java 21.
What are the most exciting features of the Java 21 release? [06:59]
Daniel Bryant: Fantastic. So folks are sort of getting onto it. What do you think are the most exciting features and tools in this Java 21 release?
Michael Redlich: Oh, virtual threads for sure. So, I know sequenced collections is still in preview, but that should be finalized at some point within the next couple of Java releases, record patterns, pattern matching, the unnamed classes and instance main methods, that’s still in preview and that’s the JEP for the beginners. We got a key encapsulation mechanism API, which is for improved security, pattern matching for switch. Oh, foreign function and memory, I think will be a final preview in JDK 22. And then generational ZGC is another one that didn’t go through the preview or incubation process, but that was a final feature right away. So there’s a lot of good stuff that is in JDK 21. I believe there were 15 new features. So, as opposed to the last few years, maybe six, seven, eight, nine, going back to, I think, Java 9, which had the largest set of JEPs available. So yes, this is really an interesting time for Java.
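To ground a few of those JDK 21 features, here is a small, hand-rolled example (not taken from the trends report itself) that combines virtual threads, record patterns, and pattern matching for switch:

```java
import java.util.concurrent.Executors;

public class Jdk21Sampler {
    sealed interface Shape permits Circle, Square {}
    record Point(double x, double y) {}
    record Circle(Point center, double radius) implements Shape {}
    record Square(Point corner, double side) implements Shape {}

    // Pattern matching for switch (JEP 441) combined with record patterns (JEP 440);
    // the switch is exhaustive over the sealed interface, so no default branch is needed.
    static double area(Shape shape) {
        return switch (shape) {
            case Circle(Point(var x, var y), var r) -> Math.PI * r * r;
            case Square(var corner, var side) -> side * side;
        };
    }

    public static void main(String[] args) {
        // Virtual threads (JEP 444): one cheap thread per task.
        try (var executor = Executors.newVirtualThreadPerTaskExecutor()) {
            for (int i = 1; i <= 1_000; i++) {
                double radius = i;
                executor.submit(() ->
                        System.out.println("area=" + area(new Circle(new Point(0, 0), radius))));
            }
        } // close() waits for the submitted tasks to finish
    }
}
```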
So yes, look for more coming up in JDK 22: we already have the foreign function and memory API, which will be a final feature, unnamed variables and patterns, the vector API, that’s been part of Project Panama for a long time, it’s going to see its seventh incubation. So, I think that pretty much projects that it will probably see a couple of previews as well. And then string templates is another new feature that will be in its second preview for JDK 22. So the review for that one ends on Wednesday. There’s usually a week once it’s proposed to target, so I anticipate seeing that as targeted for JDK 22. So that’s four at the moment.
What are your thoughts on the latest ZGC garbage collection updates in Java 21? [08:49]
Daniel Bryant: Fantastic, fantastic. Just going back to the 21 stuff you mentioned there, Mike, I’d love to get your thoughts on virtual threads in just a second, but I think another thing that would jump out to our listeners is also the Z or ZGC changes. I know we saw Suhail Patel at, I think it was QCon New York, and maybe also QCon San Francisco. He talked about the massive performance impact that can potentially have. I think he was running some Kafka clusters, something like this, and he was saying this new GC model can really reduce those stop-the-world collections, right? I don’t know if you’ve got any more thoughts on that.
Michael Redlich: I’m ashamed to admit that I haven’t experimented with a lot of the Garbage Collection in Java. I’m familiar with what’s out there, but that’s about all I can say.
Daniel Bryant: Sounds like you know just enough garbage collection to be dangerous, Mike, right? So very similar to myself, I’ll be honest.
Just looking at my notes here, I can see that ZGC, the Z Garbage Collector, was introduced in Java 11 as JEP 333, and it was a low latency, high scalability garbage collector. And now with Java 21, it’s evolved into a generational garbage collector. I think previously, even without handling generations, ZGC was quite an improvement with GC pause times, which many of us have bumped into, those stop the world pauses can be really impactful on applications or data stores that use a JVM, but with the old version of ZGC, all objects were stored together regardless of their age and all of them had to be checked during every GC run I believe.
With Java 21, ZGC splits the heap into two logical generations, one for the recently allocated objects and another for long-lived objects. So now the GC can focus on collecting younger objects more often without increasing pause time. And this is what Suhail referenced in his QCon talk. We definitely recommend consulting experts when choosing your garbage collector. Just to highlight, again, Mike and I are not GC experts. Please don’t just choose your garbage collector at random and definitely don’t use random GC command-line flags. I’m sure many of us, early on in our careers, were always looking for the magic incantations to put in our command-line flags, right? Please don’t do this. I learned from my mentors, I think Mike’s mentors as well, Ben Evans and Martijn Verburg, this really isn’t a good look and if I ever bump into GC challenges in my general day-to-day life, I do consult these kinds of experts as well. So we thoroughly encourage you to do the same.
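For reference only, and echoing the advice above to consult an expert rather than copy flags blindly, generational ZGC is opt-in on JDK 21 and is enabled alongside ZGC itself (the jar name here is just a placeholder):

```shell
java -XX:+UseZGC -XX:+ZGenerational -jar my-app.jar
```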
What are the interesting trends in Java EE, Jakarta EE, and web application development? [10:51]
Moving on to the Java EE or Jakarta EE kind of space, the enterprise edition space, and there’s lots of stuff, obviously microservices, everyone is developing microservices these days or seemingly everyone. How does the latest version of Java and Jakarta play into developers that are building microservices?
Michael Redlich: So, Jakarta EE 10 is the latest version out there. Jakarta EE 11 is scheduled to be released in the first half of next year, but the working group is looking to put out a milestone one on December 5th, so I look forward to reading about that. I can tell you there are 16 new or upgraded specifications for Jakarta EE 11, including the new Jakarta Data. And that spec is designed to be sort of an abstraction level above Jakarta Persistence and Jakarta NoSQL. So basically, you’ll have both the NoSQL and the relational world, and you can use Jakarta Data to create database back-end applications more easily. So that’s an exciting thing.
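As a rough sketch of that programming model, based on the Jakarta Data drafts available at the time of this conversation (the package and interface names, such as jakarta.data.repository.CrudRepository, may change before the final Jakarta EE 11 release, and the Beer entity is assumed to be defined elsewhere):

```java
import jakarta.data.repository.CrudRepository;
import jakarta.data.repository.Repository;

import java.util.List;

// The Jakarta Data provider supplies the implementation, whether the backing
// store is relational (Jakarta Persistence) or NoSQL (Jakarta NoSQL).
@Repository
public interface BeerRepository extends CrudRepository<Beer, Long> {
    List<Beer> findByBrewery(String brewery);  // query derived from the method name
}
```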
Jakarta NoSQL won’t be on the platform profile, unfortunately, but hopefully it will be for Jakarta EE 12. But it is available for developers to use, and I have a great beer application to demonstrate how it works… Anyway, there are a lot of great things out there. I know Jakarta Security will be upgraded, and Jakarta Servlet; Servlets, of course, have been around for a long time, going back to the Java EE days, I want to say probably the early 2000s.
Daniel Bryant: Yes, I was coding on those back then, but raw Servlets, that’s where I started my career. Yes, very much so.
Michael Redlich: Right. So it is good to see that spec evolving and being out there for developers to use. Jakarta Expression Language is another spec that’s going to be updated, and Jakarta Faces, I believe, so the old JavaServer Faces (JSF) API. So there are a lot of great things out there. I think there are 42 specs altogether in Jakarta EE.
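For anyone who has not touched Servlets since the javax days, the programming model is essentially unchanged; the main visible difference is the jakarta.servlet namespace, as in this small sketch (the /hello path is arbitrary).

```java
import java.io.IOException;

import jakarta.servlet.annotation.WebServlet;
import jakarta.servlet.http.HttpServlet;
import jakarta.servlet.http.HttpServletRequest;
import jakarta.servlet.http.HttpServletResponse;

// The same familiar servlet model, now under the jakarta.* namespace used since Jakarta EE 9.
@WebServlet("/hello")
public class HelloServlet extends HttpServlet {
    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp) throws IOException {
        resp.setContentType("text/plain");
        resp.getWriter().println("Hello from a Jakarta Servlet");
    }
}
```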
What are the interesting trends in microservices and web application development? [12:45]
Daniel Bryant: Fantastic. Now, I was chatting to my buddy, Josh Long, you know Josh as well. Josh is a legend in the Java space, in the Spring space, and he was doing a fantastic talk, I think it was IT Connect in Belgrade, and he showed us the latest features of Spring, Spring Cloud. But of course there’s Helidon you’ve talked about, there’s Micronaut, there’s many others. What’s your general read on the space? There’s a lot of microservice-type frameworks, cloud-type frameworks popping up in the Java world. Any interesting takeaways from the trend report on those?
Michael Redlich: Helidon 4 was just released not too long ago, and the big feature in that is Helidon Nima: they’ve rebuilt their web server from the ground up. The previous web server component in Helidon SE was based on Netty, but that has been redone, and it’s now a fully virtual-thread-based web server. I haven’t had a chance to really experiment with it yet, but Oracle claims there are performance benefits from using this new web server.
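Helidon’s internals aside, the underlying JDK 21 capability it builds on is easy to try on its own. This is a plain-JDK sketch, not Helidon code:

```java
import java.time.Duration;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.stream.IntStream;

// Spawns 10,000 virtual threads; each blocks briefly (standing in for I/O)
// without tying up a platform-thread pool, which is what a virtual-thread
// web server exploits to handle many concurrent requests cheaply.
public class VirtualThreadsDemo {
    public static void main(String[] args) {
        try (ExecutorService executor = Executors.newVirtualThreadPerTaskExecutor()) {
            IntStream.range(0, 10_000).forEach(i ->
                executor.submit(() -> {
                    Thread.sleep(Duration.ofMillis(100)); // simulated blocking call
                    return i;
                }));
        } // close() waits for all submitted tasks to complete
    }
}
```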
Micronaut, I know they’ve got a lot of components for building applications, so they are evolving. Version 4.1.6 is their latest release, and for anybody out there who’s familiar with Grails, it’s the same kind of syntax on the command line to build applications in Micronaut.
Let’s see, Quarkus, that’s the best-of-breed libraries, as they say; they bill it as supersonic, subatomic Java. But yes, Quarkus is a collection of libraries that developers can use to build applications, so it’s unique in that regard. Helidon is different because it has its SE and its MP, for MicroProfile, versions, so the components differ depending on what you want to use. The application server, I believe, is built into Helidon MP; that’s one of the differences. But they’re all great to use, and I really can’t say which one is better than the others. It depends on the application that you want to build. That’s the best thing I can recommend on that.
And the best thing too: all of these frameworks have a starter page. So you go and click on what you want, and it’ll download a zip file for you, and you can easily get a starter application going just by doing that.
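Those starter pages are also plain HTTP endpoints, so they can be scripted. As one hedged example, Spring Initializr exposes start.spring.io/starter.zip; the query parameters below are illustrative and worth checking against the Initializr documentation.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.nio.file.Path;

// Downloads a Maven-based Spring Boot starter with the "web" dependency into starter.zip.
public class DownloadStarter {
    public static void main(String[] args) throws Exception {
        URI uri = URI.create(
            "https://start.spring.io/starter.zip?type=maven-project&javaVersion=21&dependencies=web");
        HttpRequest request = HttpRequest.newBuilder(uri).GET().build();
        HttpResponse<Path> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofFile(Path.of("starter.zip")));
        System.out.println("Saved starter project to " + response.body());
    }
}
```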
Daniel Bryant: That’s fantastic, Mike. Something I’ve noticed more in general, going back to Josh’s talk, is that the latest versions of Spring Cloud and Spring Boot are skewing towards ease of getting started. You mentioned already that even with the Java language itself, there’s been a concerted effort to make it easier for folks, and I’ve seen things like Spring Modulith, which Oliver Drotbohm has talked about quite a bit, making it easy to get started and easy to do the right thing. So I’m liking that. That’s one thing I think you do get with a mature language stack like Java, compared to some other, perhaps earlier-stage, languages, which, again, I love as well. But with the Java stuff, we’ve come through the wringer over the years, and we sort of know, hopefully, the good things to do and the bad things to avoid. So I’m definitely seeing the microservice frameworks making it easier to get started and do the right thing.
Michael Redlich: Yes, absolutely. And I think that’s a great thing, especially if you’re new to it. This way, you can get a feel for how things are wired together, especially configuration files and things like this. So yes, Jakarta EE also has a starter page that folks can use to get started as well.
What has the community reaction been to the latest JVM startup developments in the Java ecosystem, such as CRaC and GraalVM? [15:46]
Daniel Bryant: Fantastic. We’ll try and link some of those in the show notes, Mike, to make it easy for folks to have a play around, because I’m totally supporting what you’re saying – there is no one size fits all here; have a play with these things and understand them. That’s the benefit of the trends report, right? We give you the insight as to the interesting things that we think you should be looking at. That’s fantastic. I’d love to dive into a couple of more technical things, Mike, then we can look at the community reaction and perhaps a look to the future. You’ve already mentioned virtual threads, which I think is fantastic. I also saw a fantastic talk by Gerrit Grunwald at IT Konnect around the Coordinated Restore At Checkpoint, CRaC, feature. That one came up, I believe, in the trend report too as an interesting piece of tech.
Michael Redlich: Yes, I know Azul has released their downstream distributions of OpenJDK with CRaC, C-R-A-C. Yep, I always found that acronym to be funny. But anyway, that is something that’s already built in there for developers, and I’m looking forward to exciting things coming from that. And this whole native Java effort, Spring Native, GraalVM, and Project Leyden, those will all be part of addressing cold start, I guess as it were, especially for big Java applications.
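To make the CRaC idea a little more concrete, here is a hedged sketch using the org.crac coordination API that Azul’s CRaC-enabled builds ship; the resource class and the command lines in the comments are illustrative and based on the project’s documentation, so verify them against your particular build.

```java
import org.crac.Context;
import org.crac.Core;
import org.crac.Resource;

// A Resource gets callbacks around the checkpoint so it can release and
// re-acquire things (sockets, file handles, pools) that cannot survive a snapshot.
public class CheckpointAwareService implements Resource {

    // Keep a strong reference; the global context does not keep the resource alive for you.
    private static final CheckpointAwareService INSTANCE = new CheckpointAwareService();

    @Override
    public void beforeCheckpoint(Context<? extends Resource> context) throws Exception {
        System.out.println("Checkpoint requested: closing connections");
    }

    @Override
    public void afterRestore(Context<? extends Resource> context) throws Exception {
        System.out.println("Restored: reopening connections");
    }

    public static void main(String[] args) throws InterruptedException {
        Core.getGlobalContext().register(INSTANCE);
        Thread.sleep(Long.MAX_VALUE); // keep the process alive so it can be checkpointed
    }
}

// Typical flow with a CRaC-enabled JDK (flags taken from the project docs):
//   java -XX:CRaCCheckpointTo=./cr CheckpointAwareService   # run and warm up
//   jcmd <pid> JDK.checkpoint                                # snapshot the warmed-up process
//   java -XX:CRaCRestoreFrom=./cr                            # restore almost instantly later
```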
Are other Java and OpenJDK distributions proving popular? [16:52]
Daniel Bryant: Fantastic, fantastic. You touched on that, you mentioned Azul, and of course there are many OpenJDK distributions these days. We’ve seen Corretto by Amazon, Azul of course, Oracle; loads of folks have got them, and there are community distributions out there too. Could you share a little insight for the listeners as to why they might consider the various distributions? Any thoughts on the different options out there?
Michael Redlich: I can’t think of specific examples off the top of my head, but yes, BellSoft is another one. I know one of the things they do is maintain CPUs, or critical patch updates, that are aligned with Oracle’s, so they provide those updates as well.
Daniel Bryant: I’ve seen them in buildpacks actually, Mike. I like using the CNCF Buildpacks project, and BellSoft popped up a lot in there as, I think, one of the default Java providers. So I’ve played with that.
Michael Redlich: Yes, I’m trying to think. I know one of the other downstream distributions, I believe, includes JavaFX or JFX.
Daniel Bryant: Oh, okay.
Michael Redlich: It’s great to see the downstream distributions taking OpenJDK, building it, and adding in their own features. And then I believe there is also some flow upstream, so if Oracle likes whatever a vendor has done on the downstream end, it can probably be backported into OpenJDK. So I think it’s a great relationship, and that’s the beauty of open source.
What does the future of Java look like? [18:04]
Daniel Bryant: Yes, no, 100%, 100%. Fantastic. We’ll link a few of those we’ve mentioned there in the show notes as well, so you can play around with them. Before we wrap up, Mike, I’d love to get your thoughts on what the community reaction has been to Java over the last few years. You’ve very much got your finger on the pulse, you and the InfoQ Java team. You’re hearing the comments that come back on the news pieces, and you’re going to conferences and chatting to folks. What do you think the future of Java looks like?
Michael Redlich: I think this is an awesome time to be part of the Java community and to be using Java. I still laugh because I still see references that Java is dead.
Daniel Bryant: Yes, same, I do.
Michael Redlich: And that’s not even close to what’s happening. With Java EE having been donated to the Eclipse Foundation to create an open source version of the enterprise edition, I think that is just a great thing for the Java community to contribute to. A great example is MicroProfile: it started in 2016 using CDI, JAX-RS, and I think JSON-P as the original three specs, which were part of the JSRs back then, but then community folks are the ones who created Metrics, Health, Fault Tolerance, and Config, and this was outside of Oracle. So it’s a beautiful thing, I think, and I know developers out there are excited about Java. You still have your folks that, I guess, don’t like Java. I know one person in particular from a former computer club who would prefer to use Go, but that’s his choice. That’s fine.
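For anyone who has not seen those community-built specs in code, a small hedged sketch combining MicroProfile Config and Fault Tolerance might look like this; the property name and retry counts are arbitrary, and newer MicroProfile releases use the jakarta.* CDI namespace shown here (older ones used javax.*).

```java
import jakarta.enterprise.context.ApplicationScoped;
import jakarta.inject.Inject;

import org.eclipse.microprofile.config.inject.ConfigProperty;
import org.eclipse.microprofile.faulttolerance.Fallback;
import org.eclipse.microprofile.faulttolerance.Retry;

// Config injects externalized settings; Fault Tolerance wraps the method with
// retries and a fallback, all declared through the community-driven MicroProfile specs.
@ApplicationScoped
public class GreetingService {

    @Inject
    @ConfigProperty(name = "greeting.message", defaultValue = "Hello")
    String message;

    @Retry(maxRetries = 3)
    @Fallback(fallbackMethod = "offlineGreeting")
    public String greet(String name) {
        // Imagine a flaky remote call here that sometimes throws.
        return message + ", " + name;
    }

    String offlineGreeting(String name) {
        return "Hi, " + name + " (cached greeting)";
    }
}
```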
But yes, I think there’s a lot for Java developers out there. We talked about Quarkus, Helidon, and Micronaut. Then there’s Spring; of course, we haven’t talked too much about Spring, but the Spring framework has evolved so much in the past 20 years. I think next year is the 20th anniversary; I don’t know if it already happened or if it’s next year. But I believe Spring, with its dependency injection, was a response to the complexity of Enterprise JavaBeans.
Daniel Bryant: I remember that.
Michael Redlich: And it was just dependency injection. Look at how that’s evolved into all the Spring projects, like Spring Boot, of course, Spring Cloud, Spring Data, and you can go on. There are, I would say, close to 15 to 20 projects.
How should listeners track the latest security issues and CVEs? Is the InfoQ weekly Java roundup news piece a good place to start? [20:08]
Daniel Bryant: So after the Log4Shell vulnerability that was discovered back in 2021, we had lots of coverage on InfoQ, and lots of coverage on the internet in general on that one, and it actually is still being exploited, just a heads-up to folks. Recently I read a report that a shocking number of Log4j artifacts that are still vulnerable are being downloaded via Maven Central and other places. So please don’t do that; please make sure you update your version of Log4j. But I wanted to ask in general, Mike, is there anything new in the world of Java project CVEs?
Michael Redlich: I see it a lot when I look for news items for the Java weekly roundups. A small point release will address a particular CVE, and I try to capture all of those.
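As a practical aside for readers worried about lingering Log4Shell exposure, here is a small, hedged sketch that uses plain reflection to report which log4j-core version is actually on the classpath at runtime; the manifest-based lookup can return null for shaded or repackaged jars, so treat it as a quick sanity check rather than a replacement for auditing your dependency tree.

```java
// Looks up the log4j-core version from its jar manifest at runtime.
// Versions predating the late-2021 fixes warrant a closer look at your dependencies.
public class Log4jVersionCheck {
    public static void main(String[] args) {
        try {
            Class<?> core = Class.forName("org.apache.logging.log4j.core.Layout");
            String version = core.getPackage().getImplementationVersion();
            System.out.println("log4j-core on classpath, implementation version: " + version);
        } catch (ClassNotFoundException e) {
            System.out.println("log4j-core is not on the classpath");
        }
    }
}
```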
Daniel Bryant: Fantastic. That’s a shameless plug, Mike, but I think it’s worth doing, right? I always enjoy your weekly summaries. I know you and the team basically scour the internet looking for things like this, right? All the latest framework updates. I know you summarize the JEPs and their current status within OpenJDK. So, folks should basically follow you on InfoQ – that is, I think, the key call to action there, Mike, right?
Michael Redlich: I have a lot of folks that follow my profile, so I appreciate their support. First off, I’ve got capturing all this down to a science, with bookmarks I check every week, and I follow along with the mailing lists in the OpenJDK space and all that. And just as a reminder to the listeners, the weekly Java roundups were created to capture all those small point releases that wouldn’t necessarily be worthy of a full, detailed news piece, but at least it gives the Java community a way to see what’s actually happening and to follow along. So yes, this was actually started by Ben Evans, which was a great idea, and I took it over when I became the lead.
Where to reach out to Mike and the InfoQ Java team [21:47]
Daniel Bryant: Fantastic. If folks want to find you, Mike, where’s the best place to connect? Obviously InfoQ, but are you on Twitter? You on LinkedIn? Where’s the best place for folks to reach out?
Michael Redlich: Yes, I’m on LinkedIn. I’m sure I’m one of the few people named Redlich out there. And then my Twitter handle is mpredli; I try to remain a little active on Twitter. And of course, InfoQ advertises all the news releases on Twitter as well.
Daniel Bryant: Fantastic, Mike. Well thank you very much for your time today. We’ll be sure to link the final trend report off the show notes as well when that one’s released. And I really appreciate your input, Mike. Thank you very much for chatting today.
Michael Redlich: Oh, thanks for taking the time to have this chat. It was great.
MMS • RSS
Posted on mongodb google news. Visit mongodb google news
MongoDB (NASDAQ:MDB – Get Free Report) is scheduled to post its quarterly earnings results after the market closes on Tuesday, December 5th. Analysts expect MongoDB to post earnings of $0.49 per share for the quarter. MongoDB has set its Q3 guidance at $0.47-0.50 EPS and its FY24 guidance at $2.27-2.35 EPS. Investors that are interested in registering for the company’s conference call can do so using this link.
MongoDB (NASDAQ:MDB – Get Free Report) last released its quarterly earnings data on Thursday, August 31st. The company reported ($0.63) earnings per share (EPS) for the quarter, topping analysts’ consensus estimates of ($0.70) by $0.07. The business had revenue of $423.79 million for the quarter, compared to the consensus estimate of $389.93 million. MongoDB had a negative return on equity of 29.69% and a negative net margin of 16.21%. On average, analysts expect MongoDB to post $-2 EPS for the current fiscal year and $-2 EPS for the next fiscal year.
MongoDB Price Performance
Shares of MongoDB stock opened at $401.91 on Tuesday. The stock has a market cap of $28.68 billion, a price-to-earnings ratio of -116.16 and a beta of 1.16. The company has a debt-to-equity ratio of 1.29, a quick ratio of 4.48 and a current ratio of 4.48. The stock has a 50 day moving average price of $358.56 and a 200-day moving average price of $365.43. MongoDB has a one year low of $137.70 and a one year high of $439.00.
Analyst Ratings Changes
Several research firms have weighed in on MDB. JMP Securities boosted their price target on MongoDB from $425.00 to $440.00 and gave the company a “market outperform” rating in a research note on Friday, September 1st. Piper Sandler increased their price target on MongoDB from $400.00 to $425.00 and gave the stock an “overweight” rating in a research report on Friday, September 1st. Capital One Financial raised MongoDB from an “equal weight” rating to an “overweight” rating and set a $427.00 target price for the company in a report on Wednesday, November 8th. Tigress Financial increased their price target on MongoDB from $490.00 to $495.00 and gave the company a “buy” rating in a report on Friday, October 6th. Finally, Scotiabank started coverage on MongoDB in a research note on Tuesday, October 10th. They set a “sector perform” rating and a $335.00 target price for the company. One research analyst has rated the stock with a sell rating, two have assigned a hold rating and twenty-four have given a buy rating to the company’s stock. Based on data from MarketBeat, MongoDB presently has an average rating of “Moderate Buy” and an average target price of $419.74.
Check Out Our Latest Analysis on MDB
Insider Transactions at MongoDB
In other news, CRO Cedric Pech sold 308 shares of the stock in a transaction that occurred on Wednesday, September 27th. The stock was sold at an average price of $326.27, for a total transaction of $100,491.16. Following the completion of the transaction, the executive now directly owns 34,110 shares of the company’s stock, valued at approximately $11,129,069.70. The sale was disclosed in a document filed with the SEC, which is accessible through this link. Also, CAO Thomas Bull sold 518 shares of the stock in a transaction on Monday, October 2nd. The shares were sold at an average price of $342.41, for a total value of $177,368.38. Following the sale, the chief accounting officer now owns 16,672 shares in the company, valued at approximately $5,708,659.52. The disclosure for this sale can be found here. In the last three months, insiders sold 321,077 shares of company stock valued at $114,507,479. Corporate insiders own 4.80% of the company’s stock.
Institutional Inflows and Outflows
Several large investors have recently made changes to their positions in the company. Jacobs Levy Equity Management Inc. acquired a new position in MongoDB in the third quarter worth about $2,453,000. Creative Planning raised its holdings in shares of MongoDB by 2.3% during the third quarter. Creative Planning now owns 5,139 shares of the company’s stock valued at $1,777,000 after buying an additional 114 shares during the last quarter. Osaic Holdings Inc. raised its holdings in MongoDB by 47.9% during the second quarter. Osaic Holdings Inc. now owns 14,324 shares of the company’s stock worth $5,876,000 after purchasing an additional 4,640 shares in the last quarter. Coppell Advisory Solutions LLC bought a new stake in MongoDB during the second quarter worth approximately $43,000. Finally, Alliancebernstein L.P. grew its position in MongoDB by 62.6% in the second quarter. Alliancebernstein L.P. now owns 264,330 shares of the company’s stock worth $108,637,000 after buying an additional 101,804 shares during the last quarter. 88.89% of the stock is owned by hedge funds and other institutional investors.
About MongoDB
MongoDB, Inc. provides a general purpose database platform worldwide. The company offers MongoDB Atlas, a hosted multi-cloud database-as-a-service solution; MongoDB Enterprise Advanced, a commercial database server for enterprise customers to run in the cloud, on-premise, or in a hybrid environment; and Community Server, a free-to-download version of its database, which includes the functionality that developers need to get started with MongoDB.
This instant news alert was generated by narrative science technology and financial data from MarketBeat in order to provide readers with the fastest and most accurate reporting. This story was reviewed by MarketBeat’s editorial team prior to publication. Please send any questions or comments about this story to contact@marketbeat.com.
Article originally posted on mongodb google news. Visit mongodb google news
MMS • RSS
Posted on mongodb google news. Visit mongodb google news
The latest trading session saw MongoDB (MDB) ending at $401.91, denoting a -1.42% adjustment from its last day’s close. This change lagged the S&P 500’s daily loss of 0.2%. Meanwhile, the Dow lost 0.16%, and the Nasdaq, a tech-heavy index, lost 0.07%.
Coming into today, shares of the database platform had gained 21.59% in the past month. In that same time, the Computer and Technology sector gained 8.27%, while the S&P 500 gained 7.49%.
The upcoming earnings release of MongoDB will be of great interest to investors. The company’s earnings report is expected on December 5, 2023. It is anticipated that the company will report an EPS of $0.49, marking a 113.04% rise compared to the same quarter of the previous year. At the same time, our most recent consensus estimate is projecting a revenue of $402.75 million, reflecting a 20.72% rise from the equivalent quarter last year.
For the entire fiscal year, the Zacks Consensus Estimates are projecting earnings of $2.34 per share and a revenue of $1.61 billion, representing changes of +188.89% and +25.06%, respectively, from the prior year.
Furthermore, it would be beneficial for investors to monitor any recent shifts in analyst projections for MongoDB. These revisions help to show the ever-changing nature of near-term business trends. As a result, upbeat changes in estimates indicate analysts’ favorable outlook on the company’s business health and profitability.
Our research reveals that these estimate alterations are directly linked with the stock price performance in the near future. To take advantage of this, we’ve established the Zacks Rank, an exclusive model that considers these estimated changes and delivers an operational rating system.
The Zacks Rank system, stretching from #1 (Strong Buy) to #5 (Strong Sell), has a noteworthy track record of outperforming, validated by third-party audits, with stocks rated #1 producing an average annual return of +25% since the year 1988. Over the last 30 days, the Zacks Consensus EPS estimate has remained unchanged. As of now, MongoDB holds a Zacks Rank of #3 (Hold).
In the context of valuation, MongoDB is at present trading with a Forward P/E ratio of 174.4. This represents a premium compared to its industry’s average Forward P/E of 36.29.
The Internet – Software industry is part of the Computer and Technology sector. This industry, currently bearing a Zacks Industry Rank of 32, finds itself in the top 13% echelons of all 250+ industries.
The Zacks Industry Rank assesses the strength of our separate industry groups by calculating the average Zacks Rank of the individual stocks contained within the groups. Our research shows that the top 50% rated industries outperform the bottom half by a factor of 2 to 1.
Be sure to use Zacks.com to monitor all these stock-influencing metrics, and more, throughout the forthcoming trading sessions.
Article originally posted on mongodb google news. Visit mongodb google news
MMS • RSS
Posted on mongodb google news. Visit mongodb google news
Whales with a lot of money to spend have taken a noticeably bearish stance on MongoDB.
Looking at the options history for MongoDB (MDB), we detected 17 trades.
Considering the specifics of each trade, 41% of these investors opened trades with bullish expectations and 58% with bearish.
Of the trades spotted, 2 are puts, totaling $56,960, and 15 are calls, totaling $950,401.
Predicted Price Range
Analyzing the Volume and Open Interest in these contracts, it seems that the big players have been eyeing a price window from $260.0 to $500.0 for MongoDB during the past quarter.
Analyzing Volume & Open Interest
Looking at the volume and open interest is an insightful way to conduct due diligence on a stock.
This data can help you track the liquidity and interest for MongoDB’s options for a given strike price.
Below, we can observe the evolution of the volume and open interest of calls and puts, respectively, for all of MongoDB’s whale activity within a strike price range from $260.0 to $500.0 in the last 30 days.
MongoDB Option Volume And Open Interest Over Last 30 Days
Largest Options Trades Observed:
| Symbol | PUT/CALL | Trade Type | Sentiment | Exp. Date | Strike Price | Total Trade Price | Open Interest | Volume |
|--------|----------|------------|-----------|-----------|--------------|-------------------|---------------|--------|
| MDB    | CALL     | TRADE      | BULLISH   | 06/21/24  | $410.00      | $109.1K           | 310           | 31     |
| MDB    | CALL     | TRADE      | BULLISH   | 06/21/24  | $410.00      | $104.9K           | 310           | 91     |
| MDB    | CALL     | TRADE      | BULLISH   | 06/21/24  | $410.00      | $101.0K           | 310           | 45     |
| MDB    | CALL     | TRADE      | BEARISH   | 06/21/24  | $410.00      | $91.7K            | 310           | 104    |
| MDB    | CALL     | TRADE      | BULLISH   | 06/21/24  | $410.00      | $78.3K            | 310           | 57     |
About MongoDB
Founded in 2007, MongoDB is a document-oriented database with nearly 33,000 paying customers and well past 1.5 million free users. MongoDB provides both licenses and subscriptions as a service for its NoSQL database. MongoDB’s database is compatible with all major programming languages and is capable of being deployed for a variety of use cases.
Following our analysis of the options activities associated with MongoDB, we pivot to a closer look at the company’s own performance.
Where Is MongoDB Standing Right Now?
- Trading volume stands at 577,702, with MDB’s price down 0.3%, positioned at $406.46.
- RSI indicators show the stock may be approaching overbought territory.
- Earnings announcement expected in 8 days.
Professional Analyst Ratings for MongoDB
A total of 3 professional analysts have given their take on this stock in the last 30 days, setting an average price target of $452.33.
- In a positive move, an analyst from Capital One has upgraded their rating to Overweight and adjusted the price target to $427.
- Reflecting concerns, an analyst from Wells Fargo has lowered its rating to Overweight with a new price target of $500.
- An analyst from Truist Securities has revised its rating downward to Buy, adjusting the price target to $430.
Options trading presents higher risks and potential rewards. Astute traders manage these risks by continually educating themselves, adapting their strategies, monitoring multiple indicators, and keeping a close eye on market movements. Stay informed about the latest MongoDB options trades with real-time alerts from Benzinga Pro.
Article originally posted on mongodb google news. Visit mongodb google news