AWS Introduces S3 Tables Bucket: Is S3 Becoming a Data Lakehouse?

MMS Founder
MMS Renato Losio

Article originally posted on InfoQ. Visit InfoQ

AWS has recently announced S3 table buckets, a new bucket type providing managed Apache Iceberg tables optimized for analytics workloads. According to the cloud provider, the new option delivers up to 3x faster query performance and up to 10x higher transaction rates compared to Iceberg tables stored in general-purpose S3 buckets.

In one of his final posts on the AWS Blog, Jeff Barr, vice president and chief evangelist at AWS, writes:

Table buckets are the third type of S3 bucket, taking their place alongside the existing general purpose and directory buckets. You can think of a table bucket as an analytics warehouse that can store Iceberg tables with various schemas.

Originally developed at Netflix, Apache Iceberg is a high-performance, open-source format for large analytic tables. It allows the use of SQL tables for big data, enabling engines like Spark, Trino, Flink, Presto, and Hive to access and work with the same tables simultaneously.
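Iceberg's ability to let multiple engines work with the same tables safely rests on snapshot-based metadata: every commit produces a new immutable snapshot, and readers resolve a snapshot rather than mutable state. The toy model below is a heavily simplified sketch in plain Python (not the actual Apache Iceberg library; all names are invented for illustration):

```python
class IcebergLikeTable:
    """Toy model of Iceberg-style snapshot metadata (not the real library).

    Each commit creates a new immutable snapshot pointing at the full list
    of data files; readers query a pinned snapshot, so writers never disturb
    in-flight reads and old table states stay queryable.
    """

    def __init__(self):
        self.snapshots = []  # list of (snapshot_id, files) tuples

    def commit(self, files):
        snapshot_id = len(self.snapshots) + 1
        # Snapshots are immutable: freeze the file list at commit time.
        self.snapshots.append((snapshot_id, tuple(files)))
        return snapshot_id

    def scan(self, snapshot_id=None):
        """Read the latest snapshot, or 'time travel' to an older one."""
        if not self.snapshots:
            return ()
        if snapshot_id is None:
            return self.snapshots[-1][1]
        for sid, files in self.snapshots:
            if sid == snapshot_id:
                return files
        raise KeyError(f"no snapshot {snapshot_id}")


table = IcebergLikeTable()
s1 = table.commit(["part-0001.parquet"])
s2 = table.commit(["part-0001.parquet", "part-0002.parquet"])

print(table.scan())    # latest snapshot: both files
print(table.scan(s1))  # time travel: the state at the first commit
```

A reader pinned to `s1` keeps a stable view even as later commits land, which is the property concurrent engines rely on and what makes time-travel queries possible.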

Competing with services like Databricks Delta Lake and Snowflake’s external Iceberg tables, S3 Tables are designed to perform continuous table maintenance, automatically optimizing query efficiency and storage costs. Additionally, they integrate with AWS Glue Data Catalog, enabling data engineers to leverage analytics services such as Amazon Kinesis Data Firehose, Athena, Redshift, EMR, and QuickSight.

In a separate article, the cloud provider details how Amazon S3 Tables use compaction to improve query performance. Aliaa Abbas, Anupriti Warade, and Jacob Tardieu explain:

Customers often choose Apache Parquet for improved storage and query performance. Additionally, customers use Apache Iceberg to organize Parquet datasets to take advantage of its database-like features such as schema evolution, time travel, and ACID transactions.

To illustrate the benefits of automatic compaction, the team compares the query performance of an uncompacted Iceberg table in a general-purpose bucket with that of a newer, optimized table. They write:

Our results revealed significant performance improvements when using datasets compacted by S3 Tables. With compaction enabled on the table bucket, we observed query acceleration up to 3.2x, (…) overall, we saw a 2.26x improvement in the total execution time for all eight queries.
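Much of the quoted speedup comes from reducing the number of small files a query engine must open. The sketch below is a greedy bin-packing toy in plain Python (not how the managed S3 Tables maintenance is implemented; the sizes and target are invented for illustration) showing how compaction collapses many drip-fed small files into a few near-target-size ones:

```python
def compact(file_sizes_mb, target_mb=512):
    """Greedy bin-packing: merge small files into ~target-size outputs.

    Returns the sizes of the compacted files. Fewer, larger files mean
    fewer object opens and less per-file metadata work at query time.
    """
    compacted, current = [], 0
    for size in file_sizes_mb:
        if current + size > target_mb and current > 0:
            compacted.append(current)
            current = 0
        current += size
    if current:
        compacted.append(current)
    return compacted


# 1,000 drip-fed 4 MB files collapse into a handful of ~512 MB files.
small_files = [4] * 1000
big_files = compact(small_files)
print(len(small_files), "->", len(big_files), "files")
```

The total bytes are unchanged; only the file count drops, which is why compaction helps read-heavy analytics without inflating storage.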

“Is S3 becoming a data lakehouse?” was a common sentiment in the community when the new storage option was announced, with many developers expressing excitement. Andrew Warfield, VP and distinguished engineer at Amazon, summarizes the three main benefits:

First, tables are an important primitive for analytics on S3, and second they are quickly changing how we integrate other services with data in S3. The third one is a little more subtle and speculative but in some ways it’s the one that I think is the most interesting. It’s the idea that S3 Tables, if we get them right, may turn into a much more general primitive outside of analytics engines like Spark.

John Kutay, director of product & engineering at Striim, offers a different perspective, writing:

As a data platform vendor, I demand AWS stop building high-level S3 table APIs/catalogs, and instead build low-level convenience features for me to sell a managed data lake service.

Javi Santana, co-founder at Tinybird.co, questions the pricing:

Storage and operation costs are almost the same as regular S3. But, the main point (…) is the cost of compaction “$0.05 per GB processed”. Seems like not much but I’m checking some of our customers they process around 1PB (…) That means it’s a no-go for real-time workloads when you also want to have fast reads.
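Santana's concern is easy to sanity-check with back-of-the-envelope arithmetic; the figures below only restate the numbers from his comment and are not AWS pricing guidance:

```python
PRICE_PER_GB = 0.05    # compaction charge quoted in the discussion, USD per GB processed
GB_PER_PB = 1_000_000  # decimal units; a binary PiB (1,048,576 GiB) is about 5% more

cost_per_pb = GB_PER_PB * PRICE_PER_GB
print(f"Compacting 1 PB at ${PRICE_PER_GB}/GB is roughly ${cost_per_pb:,.0f}")
```

At roughly $50,000 per petabyte processed, it is clear why Santana sees continuous compaction as a non-starter for high-churn, real-time workloads.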

Some developers, meanwhile, highlight missing functionality. Francesco Mucio, owner and BI/data architect at Untitled Data Company, concludes:

To be fair, this is not the first time that AWS released a half-baked feature/tool… and some of them stayed like that. But it’s also true that, despite the marketing announcements, not all tools are for everybody.

Further extending S3 capabilities, AWS announced at re:Invent the preview of S3 Metadata, a new feature that automatically updates object metadata on S3. Read more on InfoQ.

S3 table buckets are currently available in only three US regions. While S3 Tables are generally available, the integration with AWS Glue Data Catalog is still in preview.



Podcast: Building Safe and Usable Medical Device Software: A Conversation with Neeraj Mainkar

MMS Founder
MMS Neeraj Mainkar

Article originally posted on InfoQ. Visit InfoQ

Transcript

Shane Hastie: Good day folks, this is Shane Hastie for the InfoQ Engineering Culture podcast. Today I’m sitting down with Neeraj Mainkar. Neeraj, thanks for taking the time to talk to us today.

Neeraj Mainkar: Absolutely, my pleasure.

Shane Hastie: My normal starting point on these is who’s Neeraj?

Introductions [00:50]

Neeraj Mainkar: Sure, great question. I’m currently the VP of software engineering, and advanced technology for a company called Proprio. We are a medical device slash AI company that’s working on coming out with a navigational device that helps orthopedic surgeons ensure better outcomes in their surgery. My background, I’m actually a physicist by training. I have a PhD in computational condensed matter physics. And my last 28 years, you could probably divide that up into two broad pieces.

The first 14 years of my career were in a small defense contracting firm. We did contract work for the Army and the DHS, where I was doing a lot of physics-based simulations for a branch of the US Army. After that, around I would say 2010, 2011, is when I got into medical devices. And I've worked on a wide range of medical devices, starting with neurological devices, infusion devices, did a stint at BD doing informatics for big microbiology lab software. And then a surgical robot, and now of course navigational devices.

But early in my career, I guess you'd say, as I was building these tools, and products, and software, I became quite passionate about not just the advanced technology that comes with the territory of being a physicist, but also how it can help to make lives better, and importantly about that whole process of how you create software, especially software that is safety critical, mission-critical, especially in the healthcare space.

And how do we ensure that the technology that we bring to surgeons or other people in healthcare is safe and is effective, which is obviously not only required by the FDA and all other notified bodies, but also high quality and most importantly is intuitive to use with as little cognitive load on the user as possible. As you can imagine, that has been an ongoing challenge, and that’s what I’d love to talk about today.

Shane Hastie: In the realm that you’re in a software bug can kill people, how do you ensure the technical quality of the product? Let’s dig into that first.

Guidelines and regulations intended to ensure that software is safe [03:20]

Neeraj Mainkar: Yes, but this falls into the category of how do you make sure that the software is absolutely safe? So that’s where the design controls piece of the FDA guidelines and regulations comes in. Because those regulations are put out there to ensure that medical device companies like us follow certain very, very strict practices in our method of product and software development, so that we ensure that the device that we ultimately produce is safe and effective. Now, what does that actually mean? That means following a documented process.

It’s a very famous line amongst medical device workers is that if you don’t document it, it didn’t happen. So you document everything. It has to be a repeatable process so that you can ensure that you can work your way back to finding out where issues may have happened. So the process has to be documented, it has to be repeatable. And then you have to apply all of the tools in your trade to ensure that you are testing your device as widely as possible, as much in detail as possible.

And making sure that you cover as many different types of workflows that your device is going to be involved in as possible. And this is where some of the surgeon-related testing that I want to talk about comes into play. Again, to make sure that the device is safe, you got to have a very well-defined process that’s repeatable. You have to document everything that you do. And then you have to use all the tools in your trade to ensure that you’re developing your software in a way that actually prevents having bugs in them to start with. So basically making sure that by design your software is as free of bugs as possible. And then there’s a big stress on verification and validation, making sure that you verify the requirements of your device.

And then validation in our world is extremely important, and has a very special meaning, which means you have to make sure that a user group actually tells you that the device that you built actually serves the purpose for which it was built. And so those are basically some of the methodologies that we use to make sure that the software does not contain any bugs that might, as you said, kill people.

Shane Hastie: This is the antithesis of the “move fast and break things” that we hear in a lot of our industry today. How do you instill this culture of quality in your teams, and in the people that you work with?

Instilling a culture of quality [06:02]

Neeraj Mainkar: Yes, that’s a great question. So one of the main things that we always tell people is that medical device is really a very, very unique field in software to be in. You have to feel that responsibility, and the seriousness that goes with creating something that’s absolutely safety critical. So training becomes a big part of a new engineer’s life in a medical device company. So as soon as they come in, you have to first make sure that they’re trained on all of the regulations, and what is expected of them in terms of documentation, in terms of attention to detail, in terms of unit testing, in terms of integration testing, and in terms of overall testing. You cannot test enough. It’s kind of like the adage that we always use. You have to make sure that you are using best practices when you’re developing code to ensure that you have not let a bug creep in.

Second, it's basically making sure that the people that you hire are obviously good at their craft. Now, you would say that, “Well, that applies to everybody”. But as you can imagine, that applies especially to people that are working on something like medical devices, because you absolutely want to make sure that the people that are working on your systems are the best of the best, that are very, very good at what they do. And so quite frankly, the hiring process can be pretty long and arduous. So we want to make sure we have the right people, and when the right people come in, we want to train them.

And then periodically, always keeping this north star in everybody's mind about what we're doing this for. One of the informal examples that I always give to people is what I call the mom test, which is: you have to be confident about the fact that this device that you're working on may someday be used on a person, right?

That's the whole point. Now, would you be okay with your mother, or replace your mother with anybody else, some loved one in your family, would you be okay having this device at the other end of your loved one? If the answer to that is no, then you know what you have to do. You have to continue to work hard to make sure that the device is safe, and it's not likely to be misused, and it's not likely to throw up any kind of errors. So those are the key features of how you make sure that people understand the seriousness of developing a medical device.

Shane Hastie: So focusing on the skills, giving people that clear understanding, the training. If you were to profile your engineers, who are they?

The attitudes and skills needed for medical device software engineering [08:55]

Neeraj Mainkar: They come from all varied backgrounds. Typically, you don't just hire 10-15 year experienced people in every single one of your slots. What you try to do is take on a diverse demographic. Obviously, in key positions you want people that are experienced, having done medical devices before, having seen the circus and the challenges, and who can foresee what to do and what not to do. And then they also serve as excellent mentors, because me personally, I can't train every single person that we hire. So what I normally do is always hire into key positions, like principal architects, test managers, and development managers, people that have actually had prior medical device experience. And then when we go about hiring the engineers that are actually going to develop the software, and the testers that are actually going to test the software, that's when we try to cover the full spectrum of types.

Obviously, they all have to pass a certain level of technical knowledge, so we always have tests for people. We sometimes give people small projects to do, nothing super burdensome, but we absolutely make sure that they can do a trial problem for us, and then convince us that they're actually good at what they do. Then the other part, of course, is attitude. What is your approach to software development? One of the things, and I'm going to get on my high horse a little bit here, is that software, because of the young age of software engineering as a field really, has suffered somewhat from this issue that a lot of people can pick up software development. You know what I mean? You can pick up software development if you're reasonably smart. You don't have to go to school to be… Unlike a mechanical engineer, for instance, right?

To be hired as a mechanical engineer, you actually have to have a degree in mechanical engineering. In software, you will notice that that's not always necessarily true. People come into software through many, many different fields, many of them being self-taught. So what you have to quiz them on is how methodical and how “engineering” their approach to software engineering is.

Are you laissez-faire? Are you the cowboy programmer who wants to just rush off and start building stuff, and start coding or are you methodical? And that is also one of the things that we try to inculcate in our employees, in our engineers is that medical device software is not like gaming software. You just don’t go start building stuff. You have to follow a process. You have to, surprise, surprise, wait and make sure that your architecture is actually well-defined, and actually been vetted before you start coding something.

Those kinds of really rigorous, disciplined approaches to engineering are something that we quiz people on, to try to find their attitudes. And then something that's obviously common to any profession is how well you work with others. Software engineering sometimes suffers from the misconception that programmers tend to be loners, and they just want to work by themselves. It couldn't be farther from the truth, especially not in medical devices.

Because you have to work cross-functionally, not only with other software engineers and testers frankly, but with people from quality, people from regulatory, people, from hardware, people from systems. You have to work together. And so having that cross-functional approach to things, understanding that this medical device, the software that I’m building is part of a bigger system. And understanding your role and how you’re related to systems engineering is very, very key. Some of it you can’t always tell from interviewing people, but some of it you can engage by just asking people their prior experience and stuff like that.

Shane Hastie: So that’s the safe side of the equation. You said that there’s safe and there’s usable, and that there’s a tension between them.

Neeraj Mainkar: Absolutely.

Shane Hastie: What does usable mean in the realm of medical device software?

Usability in medical device software [13:11]

Neeraj Mainkar: Well, one simple definition of course is how intuitive it is. We use the word intuitive a lot. In fact, one of the most famous companies in our field is something called Intuitive Surgical, as you all know, the makers of the da Vinci robot. So intuitive is very key. It should be as natural as possible: especially if you think about a workflow, where I go when I click a button here, the next step that I need to do should come fairly naturally, with as little training as possible. That to me is the pedestrian definition of usable.

The more technical definition might be, and again, I'm not an expert on IEC 62366, which is the standard for usability in medical devices, but the idea is: how big is the so-called cognitive load? Now how do you measure cognitive load? This is done, and we can get into the details of that, but when you do usability testing, you do what's called formative testing.

Ideally what you do is put the user in a separate room and let them use your device with as little help as possible from anybody that knows it. And then just see, either by reading a user manual or by simply playing with the software, how quickly that person picks up on what needs to be done next, and how quickly they feel comfortable using the device.

So there may not be a single objective measure of that… Actually, there are objective measures, in the sense that you can find out in these formative studies: how many errors did the user make? Did they work out how to use the device with as little guidance as possible? So these are some of the ways we define usability, to your question.
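The objective measures mentioned here, error counts and unassisted task completion, are simple tallies once sessions are recorded. A hypothetical sketch (all session data and metric names below are invented for illustration; real formative studies follow IEC 62366-style protocols with defined tasks and use-error taxonomies):

```python
# Hypothetical formative-study sessions: (completed task unassisted?, error count).
sessions = [
    (True, 0), (True, 2), (False, 5), (True, 1), (True, 0), (False, 3),
]

# Tally the two objective measures discussed: unassisted completion and errors.
unassisted = sum(1 for ok, _ in sessions if ok)
total_errors = sum(errors for _, errors in sessions)

completion_rate = unassisted / len(sessions)
errors_per_session = total_errors / len(sessions)

print(f"unassisted completion rate: {completion_rate:.0%}")
print(f"mean errors per session:    {errors_per_session:.2f}")
```

Tracking these two numbers across design iterations gives a rough quantitative signal for the "cognitive load" discussed above, even before any formal summative study.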

Shane Hastie: So designing for that low cognitive load for that high usability or intuitive usability, how do you go about that?

Designing for low cognitive load and intuitive usability [15:13]

Neeraj Mainkar: That's the part that I recently discussed in an article that I wrote. If we say that we want to make the design user-centric, then we absolutely have to, by default, involve the user. So if I'm going to make a navigational device that I'm hoping a surgeon would use, I need to absolutely involve surgeons in the design of the device. And what that means is that they can't be an afterthought at the very end; they have to be involved from day one.

As soon as you’re ready to start designing your front end, you have to involve them, do a lot of mock-ups and get feedback. And this continuous feedback loop has to be set up, and we’ve been doing that now even at Proprio. We always bring in surgeons periodically to come and play with our device, use it, and then give us honest feedback as to how well they think that the usability of the device works.

Obviously, some of the challenges here are engineering. As anybody who does this knows, usability is one leg of a multi-legged stool. If you try to make one leg too long, and don't keep the other legs in proportion, you might end up sacrificing a lot of other equally important features of the software just for the sake of usability. So while usability is obviously front and center, we also need to make sure that other aspects of the software design are taken care of, such as things that users may not think about, like performance. This is something people realize when they start using it. If, for the sake of making things usable, we make the workflow way too long or way too slow, I'm sure that'll become a problem for users as well, and they'll complain, “Well, yes, it's easy to understand, but why is the performance so bad?”

One other aspect of it is maintainability. If, for the sake of usability, we make the user interface very complex and very distributed, then maintaining it can also become a challenge. So you have to balance all these different factors with just the right touch. Obviously, in the end, user experience and usability trump the rest, but these other factors matter as well, and that's where the push and pull in the design happens. But again, keeping surgeons in the loop, keeping users in the loop, and having them test your device, use your device, and give you continuous feedback while you're still developing is key to making sure that what you end up producing at the end is actually something that the surgeons want, and can use.

Shane Hastie: Given what you’re describing there, both in terms of the safe side of it and the usable side of it, the other thing that we’re used to in the software we deal with on a daily basis, those of us who are not in the medical device space, is that this software can be updated and enhanced and improved sometimes on a daily basis, sometimes more frequently. How do you in this environment handle the upgrades and the enhancements to products that are already out there in the marketplace?

Upgrades and enhancements [18:37]

Neeraj Mainkar: That's a very, very good question, and this is something that, thankfully, over the years, regulatory agencies such as the FDA, and others the world over, have started to recognize: the malleable nature of software, if you will, knowing that software can improve over time. So the FDA has an actual name for it, which is called post-market surveillance, right?

So this post-market surveillance is a responsibility that the FDA places very seriously on medical device makers like us, to say, “Okay, your job doesn't end when you just finish your device and put it out in the market. No, there is a big, big responsibility on you guys to keep doing post-market surveillance, monitoring how your device is being used, keeping a well-defined catalog of issues, bugs of course.

But also getting feedback from users, saying, ‘Hey, how would you like this device to be improved? What are some of your pain points? We solved some pain points by making this device for you, but obviously we may have created some new ones. So what are those? How could we continually improve your user experience?'” Because the good news is that in software that's actually entirely possible; you can continually refactor the design.

In some cases it's easier said than done, but it is certainly well within the realm of reality that you could continually improve the design and put out updates, maybe not as fast as some of the non-medical, non-safety-critical software out there, which sometimes even updates every day. Certainly we don't do it that frequently, but within a cadence of three to six months, there is no reason why a medical device company like ours can't keep putting out software improvements. And how that happens is obviously through a feedback loop even after release. So we have a channel for everybody to be able to reach out to us, and tell us what the issues are with the device that they're using.

Shane Hastie: We’re in the era of AI.

Neeraj Mainkar: Yes.

Shane Hastie: What’s happening with AI in your space?

Applications of AI in medical device software [20:49]

Neeraj Mainkar: So apart from just the buzz that's been going around, AI is being used to make actual products, and we are one of those companies: a large part of our device is very much governed by AI. But in the realm of what we're talking about here right now, usability and safety, AI is already making a huge difference. There is not a single developer right now, even in the medical device space, that doesn't use some form of AI, whether it's ChatGPT or some other competing tool out there, to actually create code for devices. That's actually happening.

We are starting to use AI now for automated unit testing, because you can actually ask a tool like ChatGPT to create unit tests for your code. It can do that faster, and with more accuracy, than a human being can. One of the things that we recognize, and this may sound counter to the idea that we need more and more software engineers, is that bugs in software are created because of the human element in software development.

It is because a human is doing the development, and because humans are prone to making errors, that we end up having so many bugs in software. That's just the nature of things. But the more you use automated tools such as AI to create unit tests, the more you ensure that these things work better, because it takes that human-error element out of the process.

We haven't done it yet, but it is on our roadmap to start using AI for virtual reality-based, simulation-based testing. If you use AI correctly in usability testing, you can analyze user interactions using AI tools. You can identify usability issues using AI tools. You can even get suggested design improvements from AI tools, and that's actually being done. Those tools are being built as we speak.

You can even have AI predict potential user challenges. You could even have AI create certain user-workflow alternatives that you may not have thought about, which help by telling you, “Hey, you didn't think about this particular alteration to the workflow”. And that just expands your universe of system-level usability testing of the device.

So frankly, the one sentence answer to that question would be the sky’s the limit. We’re just getting started with using AI all across the board, all across every layer of software engineering.

Shane Hastie: What’s the question I haven’t asked you that I should have done?

Other aspects that need to be considered [23:37]

Neeraj Mainkar: I guess talking more in terms of what the challenges are. We touched on complexity versus usability, but there are many other factors too that we haven't talked about, right? Usability also goes head-to-head with the diversity of your user base. The world would be so much simpler if you always had just one kind of user, especially in medical devices. What you have to sometimes worry about, often worry about, is that I have different user profiles, if you will. I've got the surgeon that's going to be using my device, but it could also be a med tech that's sitting there doing some pre-op planning; that's a different user. There could be somebody like an OR nurse using the same device; that's a different user. And each user has to have a similarly good and easy experience using our device.

So that can be pretty challenging, because sometimes you create workflows that cater to this side of the audience versus that side of the audience, and striking that perfect balance, where you cover every type of user equally well, is always an ongoing effort.

Another challenge that we don't always talk about, but that is right there, is that you can try to make your systems as usable as possible, but if your systems are supposed to be integrated with older legacy systems, it can limit you to some extent. I'm not saying a lot, but it can sometimes govern how much flexibility you have on your device, especially if your device is part of a pretty complex workflow where there are other devices and other software systems being used to do the other parts. You know what I mean? And so that can be a challenge.

Another challenge, of course, and this is more of an internal challenge for people like me, is that in our world, as you mentioned earlier today, there's just so much technology, and new tools and tricks keep coming into our field of view, so to speak. And sometimes being smart about all this also means making sure that we don't keep chasing the newest, shiniest tool out there, trying to focus more on user needs rather than on leveraging all these exciting new technologies.

As engineers we all want to play with whatever’s out there, and want to include that in our device because we just want to play with it. And having to control that urge, and making sure that the user need is the prime thing. And whatever else you do should be in service of that, and not the other way around where, “Oh, there’s this great new technology that exists out there, and I want to figure a way out to use it”.

Kind of like that “when all you have is a hammer, everything looks like a nail” situation. You want to avoid that, and that's more on the engineering side. And then one thing that's recent, I would say, is that these AI-based tools can also come with their own challenges. Specifically, what I mean by that is there's just so much data overload now, because modern software often tries to give surgeons, or any other users, as much information as possible. Data is now available everywhere, databases are cheap, and it's so easy to just store so much data.

But it sometimes happens that there's just an overload of data that users are now having to look at. And while on the face of it that may seem like a great problem to have, it can also be overwhelming and confusing if it's not arranged and presented properly. So information architecture, data visualization, and contextualizing what you're seeing in a software interface have become much more of a responsibility for people in my position, or engineers developing medical device software, because that is very much a thing.

There's so much data. As an example, here at Proprio we collect literally a terabyte of data every time we do a surgery. That's a lot of data. It's on us to use it and present it in a way that's useful and not overwhelming to the surgeon and other users. We don't want to go the other way either, where we suppress any kind of data; that's not what we want to do. We want to strike the right balance between giving them the right type of data that they can then use for decision-making, and hiding the rest. So those are some of the other challenges that I think people may not be thinking about when you're trying to develop something that's very usable.

Shane Hastie: Neeraj, wonderful insight into the world of software engineering in spaces where a bug can kill, and where you are truly making a difference in people’s lives. If people want to continue the conversation, where can they find you?

Neeraj Mainkar: You can reach me [on LinkedIn] at nmainkar, that’s basically my first initial N of my first name and my last name, which is Mainkar. You can write to me, and happy to answer any questions anybody might have.

Shane Hastie: Thank you so much.

Neeraj Mainkar: Thank you.




2 Top Tech Stocks Under $250 to Buy in 2025 – The Globe and Mail

MMS Founder
MMS RSS

Posted on mongodb google news. Visit mongodb google news

Artificial intelligence (AI) has taken the world by storm. It is transforming the technology industry by advancing innovation, efficiency, and new opportunities across multiple sectors. Top tech companies such as Amazon (AMZN), Microsoft (MSFT), and Nvidia (NVDA) continue to dominate the stock market.

Aside from these well-known tech stocks, there are a few other undervalued options under $250 that could be valuable additions to your portfolio in 2025. With a market capitalization of $466 billion, Oracle (ORCL) has been a long-standing player in the tech industry. 

Meanwhile, Goldman Sachs analysts, led by Ryan Hammond, believe platform stocks such as MongoDB (MDB) could be the “primary beneficiaries of the next wave of generative AI investments.” Let’s see if now is a good time to buy these great tech stocks.

Tech Stock #1: Oracle Corporation

The first tech stock on my list is Oracle Corporation, the largest enterprise-grade database and application software provider. Its products and services include Oracle Database, Oracle Fusion Cloud, and Oracle Engineered Systems, among others. 

The company’s better-than-expected earnings in 2024 boosted investor confidence, leading to a surge in its stock price. The stock soared 60.1% in 2024, outperforming the S&P 500 Index’s ($SPX) gain of 24%.

(Stock price chart via www.barchart.com)

The company operates in three segments. The cloud and license segment generates the majority of Oracle’s revenue. It includes Oracle cloud service subscriptions as well as on-premises software license support. The company’s cloud offerings include Oracle Cloud Infrastructure (OCI) and Fusion Cloud Applications, among others. The other two segments are hardware, which includes Oracle Engineered Systems, and services, which assist customers in optimizing the performance of their Oracle applications and infrastructure. 

Oracle has integrated AI capabilities into its cloud services and applications, thereby enhancing their functionality and appeal. In the third quarter, total revenue increased 9% to $14.1 billion, with the cloud and license segment up 11% year-over-year. Adjusted earnings per share increased 10% to $1.47. The total remaining performance obligation (RPO), which refers to contracted revenue that has yet to be earned, increased by 49% to $97 billion. 

Oracle also pays dividends, which adds to its appeal to income investors. It yields 0.95%, compared to the technology sector’s average of 1.37%. Its low payout ratio of 19.5% also makes the dividend payments sustainable for now. 

The global cloud computing market is expected to reach $2.29 trillion by 2030. Oracle’s investments in cloud infrastructure and applications position it to benefit from this growth. However, it operates in a highly competitive environment, with rivals such as Microsoft Azure, Amazon’s AWS, and Google (GOOGL) Cloud, which together account for 63% of the cloud market. Oracle owns just 3% of this market.

Oracle’s prospects are dependent on its ability to implement its growth strategy effectively. Sustained double-digit growth in cloud services is critical to maintaining investor confidence. 

At the end of the quarter, Oracle’s balance sheet showed cash, cash equivalents, and marketable securities totaling $11.3 billion. The company also generated free cash flow of $9.5 billion, which allowed it to effectively manage its debt while funding acquisitions and returning capital to shareholders via dividends. 

While Oracle’s balance sheet remains strong, competitors such as Amazon and Microsoft have significant capital and resources, posing a constant threat. 

Analysts who cover Oracle stock expect its revenue and earnings to increase by 8.9% and 10.7%, respectively, in fiscal 2025. Revenue and earnings are further expected to grow by 12.5% and 14.5%, respectively, in fiscal 2026. Trading at 23x forward 2026 earnings, Oracle is a reasonable tech stock to buy now, backed by its strong financial performance, competitive advantages, and exposure to high-growth markets. 

What Does Wall Street Say About ORCL Stock?

Overall, analysts’ ratings for Oracle are generally positive, with 20 maintaining a “Strong Buy” or “Outperform” rating out of the 32 analysts covering the stock. Plus, 11 analysts recommend a “Hold,” and one suggests a “Strong Sell.” The average target price for Oracle stock is $193.63, representing potential upside of 16.2% from its current levels. The high price estimate of $220 suggests the stock can rally as much as 32% this year. 

(Analyst ratings via www.barchart.com)

Tech Stock #2: MongoDB

The second stock on my list is MongoDB, an emerging AI company. With a market cap of $17.3 billion, MongoDB is becoming a leading name in the database management space. Its business is built around database solutions, with MongoDB Atlas serving as its flagship product, a cloud-based database-as-a-service (DBaaS). Atlas has been deployed on major cloud providers such as AWS, Azure, and Google Cloud, accounts for a majority of MongoDB’s revenue, and has been a key driver of its growth.

MongoDB stock has fallen 36% over the past 52 weeks compared to the broader market’s 24% gain. This dip could be a great buying opportunity, as Wall Street expects the stock to soar this year. 

(Stock price chart via www.barchart.com)

In the third quarter of fiscal 2025, total revenue increased by an impressive 22% year on year to $529.4 million, with Atlas revenue growing by 26%. The company’s subscription-based revenue model guarantees a consistent stream of recurring income, which increased by 22% in the quarter.

MongoDB offers consulting, training, and implementation services to help businesses make the most of their database solutions. Services revenue increased by 18% to $17.2 million in Q3. Adjusted earnings per share stood at $1.16, an increase of 20.8% from the prior-year quarter.

Compared to $1.68 billion in revenue and EPS of $3.33 in fiscal 2024, management expects fiscal 2025 revenue of $1.975 billion and adjusted EPS of $3.02. Analysts predict that revenue will increase by 17.6%, while earnings decline to $3.05 per share, still above management’s own estimate.

In fiscal 2026, however, earnings could rebound 9.3% to $3.33 per share, alongside a 17.2% increase in revenue. MDB stock is trading at seven times forward 2026 sales, compared to its five-year historical average of 21x.

What Does Wall Street Say About MDB Stock?

Overall, Wall Street rates MDB stock a “Moderate Buy.” Out of the 32 analysts covering the stock, 22 rate it a “Strong Buy,” three suggest it’s a “Moderate Buy,” five rate it a “Hold,” and two recommend a “Strong Sell.”

The average target price for MDB stock is $378.86, representing potential upside of 62.7% from its current levels. The high price estimate of $430 suggests the stock can rally as much as 84.7% this year. 


On the date of publication, Sushree Mohanty did not have (either directly or indirectly) positions in any of the securities mentioned in this article. All information and data in this article is solely for informational purposes. For more information please view the Barchart Disclosure Policy here.

Article originally posted on mongodb google news. Visit mongodb google news



Improving Threads’ iOS Performance at Meta

MMS Founder
MMS Sergio

Article originally posted on InfoQ. Visit InfoQ

An app’s performance is key to making users want to use it, say Meta engineers Dave LaMacchia and Jason Patterson. This includes making the app lightning-fast, battery-efficient, and reliable across a range of devices and connectivity conditions.

To improve Threads’ performance, Meta engineers measured how fast the app launches, how easy it is to post a photo or video, how often it crashes, and how many bug reports people file. To this end, they defined a number of metrics: frustrating image-render experience (FIRE), time-to-network content (TTNC), and creation-publish success rate (cPSR).

FIRE is the percentage of people who experience a frustrating image-render experience, which may lead them to leave the app while an image is rendering across the network. Roughly, FIRE is the ratio of users who leave the app before an image fully renders to all users who attempt to display that image. Measuring this metric allows Threads developers to detect regressions in how images load for users.
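As a rough illustration of that definition, the metric reduces to a simple ratio over image-load events. This is a hypothetical sketch; the event shape and function name are invented for illustration, not Meta’s actual pipeline:

```python
def fire_rate(image_load_events):
    """FIRE, roughly: users who left before an image finished rendering,
    divided by all users who attempted to display it.
    Each event is a dict like {"user": ..., "completed": bool}."""
    attempts = len(image_load_events)
    if attempts == 0:
        return 0.0
    abandoned = sum(1 for e in image_load_events if not e["completed"])
    return abandoned / attempts

events = [
    {"user": "a", "completed": True},
    {"user": "b", "completed": False},  # left before the render finished
    {"user": "c", "completed": True},
    {"user": "d", "completed": True},
]
print(fire_rate(events))  # 0.25
```

Tracked over time, a rising ratio like this would flag a regression in image loading.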

Time-to-network content (TTNC) is roughly the time required for the app to launch and display the user’s feed. Long loading time is another experience killer that may lead users to abandon the app. Keeping the app’s binary size small is paramount to fast launches:

Every time someone tries to commit code to Threads, they’re alerted if that code change would increase our app’s binary size above a configured threshold.
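A pre-merge gate like the one described can be sketched as a simple size check against a budget. The threshold and function names here are illustrative assumptions, not Meta’s tooling:

```python
import os
import tempfile

# Hypothetical budget; real thresholds would be tuned per app and build.
SIZE_BUDGET_BYTES = 80 * 1024 * 1024  # e.g. an 80 MB budget

def binary_size_ok(path, budget=SIZE_BUDGET_BYTES):
    """Return True if the built binary fits within the size budget."""
    return os.path.getsize(path) <= budget

# Demo with a stand-in "binary" written to a temp file.
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(b"\x00" * 1024)  # 1 KB stand-in
print(binary_size_ok(f.name))               # True: well under budget
print(binary_size_ok(f.name, budget=512))   # False: over a 512-byte budget
os.unlink(f.name)
```

In CI, a failing check like this would block the commit and alert the author.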

Additionally, they removed unnecessary code and graphics assets from the app bundle, resulting in a binary one-quarter the size of Instagram.

As to navigation latency, this is possibly even more critical than launch time. Meta engineers carried out A/B tests and found that:

With the smallest latency injection, the impact was small or negligible for some views, but the largest injections had negative effects across the board. People would read fewer posts, post less often themselves, and in general interact less with the app.

To ensure that no changes cause a regression in navigation latency, Meta engineers created SLATE, a logger system that tracks relevant events like triggers of a new navigation, the UI being built, activity spinners, and content from the network or an error being displayed.

It’s implemented using a set of common components that are the foundation for a lot of our UI and a system that measures performance by setting “markers” in code for specific events. Typically these markers are created with a specific purpose in mind.
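The marker idea can be sketched as a small trace object that timestamps named events and reports the spans between them. This is a toy illustration in the spirit of SLATE; the class and marker names are hypothetical, not Meta’s code:

```python
import time

class NavigationTrace:
    """Toy marker-based latency logger: each marker records when a
    navigation-related event happened, and spans are computed between them."""

    def __init__(self):
        self.markers = {}

    def mark(self, name):
        self.markers[name] = time.monotonic()

    def elapsed(self, start, end):
        return self.markers[end] - self.markers[start]

trace = NavigationTrace()
trace.mark("navigation_triggered")
time.sleep(0.05)          # stand-in for building the UI
trace.mark("ui_built")
time.sleep(0.05)          # stand-in for fetching content from the network
trace.mark("content_displayed")
print(f"{trace.elapsed('navigation_triggered', 'content_displayed'):.3f}s")
```

A regression test could then assert that the triggered-to-displayed span stays under a budget for each view.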

Creation-publish success rate (cPSR) measures how likely it is for a user to successfully complete the process of posting some content. On iOS, posting a video or large photo is especially tricky, since the user could background the app after posting their content without waiting for the upload to complete, in which case the app may be terminated by the OS.

Here, the approach taken by Meta was aimed at improving the user experience in those cases when posting failed. This was accomplished by introducing a new feature, called Drafts, to allow users to manage failed posts in more flexible ways instead of just providing the option to retry or abort the operation.

We discovered that 26 percent fewer people submitted bug reports about posting if they had Drafts. The feature was clearly making a difference.

Another approach was to reduce perceived latency, as opposed to absolute latency, by showing that a post has been received as soon as the data upload completes, before it has been processed and published.

Last but not least, Meta engineers saw a great improvement in app stability after they adopted Swift’s complete concurrency checking, which, they say, does a great job of preventing data races and the hard-to-debug problems they cause.




MongoDB Inc (MDB) Shares Up 4.42% on Jan 2 – GuruFocus

MMS Founder
MMS RSS

Posted on mongodb google news. Visit mongodb google news

Shares of MongoDB Inc (MDB, Financial) surged 4.42% in mid-day trading on Jan 2. The stock reached an intraday high of $247.00, before settling at $243.10, up from its previous close of $232.81. This places MDB 52.30% below its 52-week high of $509.62 and 14.27% above its 52-week low of $212.74. Trading volume was 1,281,051 shares, 56.7% of the average daily volume of 2,257,831.

Wall Street Analysts Forecast


Based on the one-year price targets offered by 32 analysts, the average target price for MongoDB Inc (MDB, Financial) is $377.12 with a high estimate of $520.00 and a low estimate of $180.00. The average target implies an upside of 55.13% from the current price of $243.10. More detailed estimate data can be found on the MongoDB Inc (MDB) Forecast page.

Based on the consensus recommendation from 35 brokerage firms, MongoDB Inc’s (MDB, Financial) average brokerage recommendation is currently 2.1, indicating “Outperform” status. The rating scale ranges from 1 to 5, where 1 signifies Strong Buy, and 5 denotes Sell.

Based on GuruFocus estimates, the estimated GF Value for MongoDB Inc (MDB, Financial) in one year is $506.50, suggesting an upside of 108.35% from the current price of $243.10. GF Value is GuruFocus’ estimate of the fair value that the stock should be traded at. It is calculated based on the historical multiples the stock has traded at previously, as well as past business growth and future estimates of the business’ performance. More detailed data can be found on the MongoDB Inc (MDB) Summary page.

This article, generated by GuruFocus, is designed to provide general insights and is not tailored financial advice. Our commentary is rooted in historical data and analyst projections, utilizing an impartial methodology, and is not intended to serve as specific investment guidance. It does not formulate a recommendation to purchase or divest any stock and does not consider individual investment objectives or financial circumstances. Our objective is to deliver long-term, fundamental data-driven analysis. Be aware that our analysis might not incorporate the most recent, price-sensitive company announcements or qualitative information. GuruFocus holds no position in the stocks mentioned herein.

Article originally posted on mongodb google news. Visit mongodb google news



MongoDB’s Future Surge! Why Tech Investors Are Buzzing – Mi Valle

MMS Founder
MMS RSS

Posted on nosqlgooglealerts. Visit nosqlgooglealerts

In an era where data reigns supreme, MongoDB Inc. (NASDAQ: MDB) is capturing the spotlight of tech-savvy investors, promising unprecedented growth within the evolving landscape of database management solutions. As businesses transition from traditional SQL databases to more flexible, scalable solutions, MongoDB’s NoSQL architecture is emerging as a vital asset for modern enterprises.

The reason for this newfound investor enthusiasm goes beyond MongoDB’s adaptability. With the rise of machine learning, artificial intelligence, and IoT technologies, organizations are increasingly prioritizing solutions that can handle diverse and unstructured data. MongoDB’s document-oriented database model is tailor-made for such demands, offering versatility that rigid relational databases struggle to match.

Furthermore, MongoDB’s strategic partnerships and expansion efforts, including collaborations with cloud giants like AWS and Google Cloud, have bolstered its market position. This integration with leading cloud platforms positions MongoDB as a prime player in the cloud-native database sector, an industry projected for substantial growth over the next few years.

What sets MongoDB apart is not just its technological prowess, but its commitment to innovation and adaptation. Initiatives like MongoDB Atlas, its cloud-based Database as a Service (DBaaS), reflect its dedication to staying ahead in the data solutions arena.

For forward-thinking investors, MongoDB represents a compelling prospect—a company not just keeping pace with today’s data demands but setting the stage for tomorrow’s advancements. With an expanding market and technological relevance, MongoDB stocks are quickly becoming a top consideration for those chasing the future of data.

Why MongoDB is the Future of Data Management: Insights and Innovations

In today’s data-driven world, MongoDB Inc. (NASDAQ: MDB) is quickly becoming a standout choice for investors focused on the transformative future of database management solutions. While the underlying buzz around MongoDB stems from its adaptive NoSQL architecture, there are several additional aspects driving its rise in prominence worth exploring.

MongoDB: Features and Innovations

MongoDB’s document-oriented database model addresses the demands of handling diverse, unstructured data by providing unparalleled versatility. This feature makes it especially suitable for businesses delving into machine learning, artificial intelligence, and IoT applications. The platform’s ability to manage vast and varied datasets gives it an edge over traditional relational databases which typically lack such flexibility.

Key Features:

– Document-Based Storage: allows more flexible data structures.
– Scalability and Performance: easily accommodates growing data needs.
– Cloud Integration: works seamlessly with AWS and Google Cloud services.

MongoDB Atlas: Cloud-Based Solution

One of MongoDB’s standout innovations is MongoDB Atlas, a cloud-based Database as a Service (DBaaS) offering. This solution underscores MongoDB’s commitment to pioneering cloud-native data management solutions, facilitating easier and more efficient interactions with cloud infrastructures. By providing automated operational tasks like backups, scaling, and updates, MongoDB Atlas helps organizations manage their data pipelines with reduced overhead, placing it at the forefront of cloud data services.

Strategic Partnerships and Market Expansion

The strategic alliances MongoDB has formed, particularly with cloud service giants like AWS and Google Cloud, provide it with a fortified position in the cloud-native database market. Such partnerships enhance MongoDB’s capabilities and expand its reach, making it a crucial tool for enterprises venturing into the technological future. The cloud-native database industry is projected for significant growth, further increasing MongoDB’s attractiveness as a long-term investment prospect.

Predictions and Market Trends

As businesses continue to prioritize data-driven decision-making, the demand for databases that can support big data analytics and innovative tech solutions will soar. MongoDB is strategically placed to capture a substantial share of this market. Analysts predict a strong upward trend in MongoDB’s growth trajectory, driven by its pioneering approach to database management and sustained investment in innovation and partnerships.

Conclusion

For those seeking investment opportunities at the intersection of technology and data management, MongoDB presents a compelling prospect. With its robust features, cloud-first orientation, and innovative spirit, MongoDB is not only adapting to today’s data demands but is also setting the stage for future advancements in the field. As the data landscape continues to evolve, MongoDB’s strategic positioning makes it a frontrunner in the pursuit of next-gen database solutions.

For more information, visit the official MongoDB website.





Database Trends: A 2024 Review and a Look Ahead – The New Stack

MMS Founder
MMS RSS

Posted on nosqlgooglealerts. Visit nosqlgooglealerts






Database Trends: A 2024 Review and a Look Ahead

Here’s a round-up of the big database influences in 2024 — like vector store, GraphQL, and open table formats — and what they portend for 2025.


Jan 2nd, 2025 6:28am by



Image by Diana Gonçalves Osterfeld.

For databases, 2024 was a year for both classic capabilities and new features to take priority at enterprises everywhere. As the pressure increased for organizations to operate in a data-driven fashion, new and evolving data protection regulations all over the world necessitated better governance, usage, and organization of that data.

And, of course, the rise of artificial intelligence shined an even brighter light on the importance of data accuracy and hygiene, enabling accurate, customized AI models and contextualized prompts to be constructed. Key to managing all of this have been the databases themselves, be they relational, NoSQL, multimodel, specialized, operational, or analytical.

In 2025, databases will continue to grow in importance, and that’s not going to stop. Even if today’s incumbent databases are eventually eclipsed, the need for a category of platforms that optimize the storage and retrieval of data will persist because they are the essential infrastructure for high-value, intelligent systems.

Remember, there’s nothing magic or ethereal about “data.” They are simply point-in-time recordings of things that happened, be it temperatures that changed, purchases that were made, links that were clicked, or stock levels that went up or down. Data is just a reflection of all the people, organizations, machines, and processes in the world. Tracking the current, past, and expected future state of all these entities, which is what databases do, is a timeless requirement.

The most dominant database platforms have been with us for decades and have achieved that longevity by adopting new features reflecting the tech world around them while staying true to their core mission of storage and querying with the best possible performance.

Decades ago, it was already apparent that all business software applications were also database applications, and that’s no less true today. But now that truth has expanded beyond applications to include embedded software at the edge for IoT; APIs in the cloud for services and SaaS offerings, and special cloud infrastructure for AI inferencing and retrieval.

What’s Our Vector, Victor?

One big change of late, and one that will continue into 2025, is the use of databases to store so-called vectors. A vector is a numerical representation of something complex. In physics, a vector can be as simple as a magnitude paired with a direction. In the data science world, a vector can be a concatenated encoding of machine learning model feature values.

In the generative AI world, the complex entities represented by vectors include the semantics and content of documents, images, audio, and video files, or pieces (“chunks”) thereof. A big trend that started in past years but gained significant momentum in 2024 and that will increase in 2025 is the use of mainstream databases to store, index, search and retrieve vectors. Some databases are serving as platforms on which to generate these vector embeddings, as well.

This goes beyond the business-as-usual practice of operational database players adding to their feature bloat. In this case, it’s a competitive move meant to counter vector database pure-play vendors like Pinecone, Zilliz, Weaviate, and others. The big incumbent database platforms, including Microsoft SQL Server, Oracle Database, PostgreSQL, and MySQL on the relational side, and MongoDB, DataStax/Apache Cassandra, Microsoft Cosmos DB, and Amazon DocumentDB/DynamoDB on the NoSQL/multimodel side, have all added vector capabilities to their platforms.

These capabilities usually start with the addition of a few functions to the platform’s SQL dialect to determine vector distance and then extend to support for a native VECTOR data type, including string and binary implementations. Many platforms are also adding explicit support for the retrieval augmented generation (RAG) programming pattern that uses vectors to contextualize the prompts sent to large language models (LLMs).
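The distance functions those SQL dialects add typically compute something like cosine distance between embedding vectors. A plain-Python sketch of what such a function does under the hood (the vectors and names here are toy illustrations, not any vendor’s actual implementation):

```python
import math

def cosine_distance(a, b):
    """What a SQL-level vector distance function typically computes:
    1 minus the cosine similarity of two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return 1.0 - dot / norm

# Toy 4-dimensional "embeddings"; real ones have hundreds of dimensions.
doc_vec   = [0.1, 0.9, 0.2, 0.0]
query_vec = [0.1, 0.8, 0.3, 0.0]
other_vec = [0.9, 0.0, 0.1, 0.7]

# Retrieval ranks candidates by distance: smaller means a closer match.
print(cosine_distance(query_vec, doc_vec) < cosine_distance(query_vec, other_vec))  # True
```

In a RAG pipeline, the rows with the smallest distances to the query embedding are the chunks pulled in to contextualize the LLM prompt.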

Where does this leave specialist vector databases? It’s hard to say. While those platforms will emphasize their higher-end features, the incumbents will point out that using their platforms for vector storage and search will help avoid the complexity that adopting an additional, purpose-specific database platform can bring.

GenAI for Developers and Analysts

Vector capabilities are not the only touch point between databases and generative AI, to be sure. Sometimes, the impact of AI is not on the database itself but rather on the tooling around it. In that arena, the biggest tech at the intersection of databases and generative AI (GenAI) is a range of natural language-to-SQL interfaces. The ability to query a database using natural language is now so prevalent that it has become a “table stakes” capability. But there’s a lot of room left for innovation here.

For example, Microsoft provides not just a chatbot for generating SQL queries but also allows inline comments in SQL script to function as GenAI prompts that can generate whole blocks of code. It also provides for code completion functionality that pops up on the fly right as developers are composing their code.

On the analytics side, Microsoft Fabric Copilot technology lends a hand in notebooks, pipelines, dataflows, real-time intelligence assets, data warehouses and Power BI, both for reports and DAX queries. DAX — or Data Analysis eXpressions — is Microsoft’s specialized query language for Power BI and the Tabular mode of SQL Server Analysis Services. It’s notoriously hard to write in the opinion of many (including this author), and GenAI technology makes it much more accessible.

Speaking of BI, analytical databases have AI relevance, too. In fact, in July of this year, OpenAI acquired Rockset, a company with one such platform, based on the open source RocksDB project, to accelerate its platform’s retrieval performance. Snowflake, a relational cloud data warehouse-based platform, also supports a native VECTOR type along with vector similarity functions, and its Cortex Search engine supports vector search operations. Snowflake also supports six different AI embedding models directly within its own platform. Other data warehouse platforms support vector embeddings, including Google BigQuery. On the data lakehouse side, Databricks is in the vector game, too.

‘OLxP’

Staying with analytical databases for a minute, another trend to watch for in 2025 will be that of bringing analytical and operational databases together. This fusion of OLTP (online transactional processing) and OLAP (online analytical processing) sometimes gets called operational analytics. It also garners names like “translytical,” and HTAP (hybrid transactional/analytical processing).

No matter what you call it, many vendors are bullish on the idea. This includes SAP, whose HANA platform was premised on it, and SingleStore, whose very name (changed from MemSQL in 2020) references the platform’s ability to handle both. Snowflake’s Unistore and Hybrid Tables features are designed for this use case as well. Databricks’ Online Tables also use a rowstore structure, though they’re designed for feature store and vector store operations, rather than OLTP.

Not everyone is enamored of this concept, however. For example, MongoDB announced in September of this year that its Atlas Data Lake feature, which never made it out of preview, is being deprecated. MongoDB seems to be the lone contrarian here, though.

Data APIs and Mobile

That’s not the only area where MongoDB has retreated from territory that others are rushing into. MongoDB also announced the deprecation of its Atlas GraphQL API. Meanwhile, Oracle REST Data Services (ORDS), Microsoft’s Database API Builder, and AWS AppSync add GraphQL capabilities to Oracle Autonomous Database/Oracle 23ai, Microsoft Azure SQL Database/Cosmos DB/SQL Database for Postgres, and Amazon DynamoDB/Aurora/RDS, respectively.

What about mobile databases? At one time, they were a major focus area for Couchbase, for Microsoft, and for MongoDB, with its Atlas Device SDKs/Realm platform. Couchbase Mobile is still a thing, but Microsoft Azure SQL Edge at least nominally shifts the company’s focus to IoT use cases, and MongoDB has officially deprecated Atlas Realm, its Device SDKs and Device Sync (though the on-device mobile database will continue to exist as an open source project). It’s starting to look like purpose-built small-footprint databases, including SQLite, and perhaps Google‘s Firebase, have withstood the shakeout here. Clearly, using one database platform for every single use case is not always an efficacious choice.

Multimodel or Single Platform?

Is the same true for NoSQL/multimodel databases, or can conventional relational databases be a one-stop shop for customers’ needs? It’s hard to say. Platforms like SQL Server, Oracle and Db2 added graph capabilities in past years, but adoption of them would seem to be modest. Platforms like MongoDB and Couchbase still dominate the document store world. Cassandra and DataStax are still big in the column-family category, and Neo4j, after years of competitive challenges, still seems to be king of the graph databases.

But the RDBMS jack-of-all-trades phenomenon isn’t all a mirage. Mainstream, relational databases have bulked up their native JSON support immensely, with Microsoft having introduced in preview this year a native JSON data type on Azure SQL Database and Azure SQL Managed Instance. Microsoft also announced at its Ignite conference in November that SQL Server 2025 (now in private preview) will support a native JSON data type as well.

Oracle Database, MySQL, Postgres and others have for some time now had robust JSON implementations too. And even if full-scale graph implementations in mainstream databases have had lackluster success, various in-memory capabilities in the major database platforms have nicely ridden out the storm.

Multimodel NoSQL has shown real staying power as well. Microsoft’s Cosmos DB supports document, graph, column-family, native NoSQL and full-on Postgres relational capabilities in a single umbrella platform. Similarly, DataStax explicitly supports column-family, tabular, graph and vector, while Couchbase supports document and key-value modes.

Data Lakes, Data Federation and Open Table Formats

One last area to examine is that of data virtualization and federation, along with increasing industry-wide support for open table formats. The requirement of cross-data-source querying has existed for some time. Decades ago, the technology existed in client-server databases for querying external databases, with technologies like Oracle Database Links, Sybase Remote Servers and Microsoft SQL Server linked servers. Similarly, a killer feature of Microsoft Access over 30 years ago was its Jet database engine’s remote table functionality, which could connect to data in CSV files, spreadsheets, and other databases.

With the advent of Hadoop and data in its storage layer (i.e., what later came to be known as data lakes), bridging conventional databases to “big data” became a priority, too. The concept of an “external” table originated with a technology called SQL-H in the long-gone Aster Database, acquired by Teradata in 2011. By altering the CREATE TABLE syntax, a logical representation of the remote table could be created without physically copying it, but the query engine could still treat it as part of the local database.

Treating remote data as local is also called data virtualization. Joining a remote table and a local one (or multiple remote tables across different databases) in a single SQL SELECT is called executing a federated query. To varying degrees, both data virtualization and federation have been elegant in theory but often lacking in performance.
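
A federated query can be illustrated with nothing more than Python's built-in sqlite3 module, using ATTACH DATABASE to make a second ("remote") database joinable from the first. This is a toy stand-in for the cross-database linking features described above, with hypothetical table names:

```python
import sqlite3

# One in-memory database plays the "local" store.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER, customer_id INTEGER)")
conn.execute("INSERT INTO orders VALUES (1, 10), (2, 20)")

# ATTACH makes a second database queryable from the same connection,
# so a single SELECT can join tables across both sources.
conn.execute("ATTACH DATABASE ':memory:' AS remote")
conn.execute("CREATE TABLE remote.customers (id INTEGER, name TEXT)")
conn.execute("INSERT INTO remote.customers VALUES (10, 'Acme'), (20, 'Globex')")

rows = conn.execute(
    "SELECT o.id, c.name FROM orders o "
    "JOIN remote.customers c ON o.customer_id = c.id ORDER BY o.id"
).fetchall()
print(rows)  # [(1, 'Acme'), (2, 'Globex')]
```

Real data virtualization engines do the same thing across heterogeneous systems and networks, which is where the performance challenges come in.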

To help address this, open table formats have come along, and this year, they have become very important. The top contenders are Delta Lake and Apache Iceberg, with Apache Hudi coming in as somewhat of an also-ran. In practical terms, all three are based on Apache Parquet. Unlike CSV, other text-based file formats, or even Parquet itself, data stored in open table formats can be queried, updated and managed in a high-performance, high-consistency manner.

In fact, for its Fabric platform, Microsoft reworked both its original data warehouse engine and its Power BI engine to use Delta Lake as a native format. Snowflake did the same with Iceberg and other vendors have followed suit. Meanwhile, there are still a variety of database platforms that connect to data stored in open table formats as external tables, rather than as truly native ones.

Next year, look for open table format support to become increasingly robust, and get ready to devise a data strategy based upon it. With support for these formats, there’s a good chance that many, if not most, database engines will be able to share the same physical data stored in these formats, query it at native speeds, and operate on it in both a read and write capacity. Proprietary formats may slowly be giving way, and platforms in the future may succeed on their innate capabilities more than the feedback loop between their dominance and resulting data gravity.

Feature and Skillset Equilibrium

Ultimately, a single database platform cannot be all things to all customers and users. Many of the incumbent platforms try to support the full complement of use cases, but most end up having a litany of new features added over the years that turn out to be more gimmick than mainstay.

Luckily, the new-fangled features in incumbent databases tend to go through a Darwinist process, eventually distilling down to a core set of capabilities that likely won’t achieve parity with those of specialized database platforms but will nonetheless be sufficient for a majority of customers’ needs. As the superfluous capabilities are whittled away, important workhorse features get onboarded, adopted, and added to the mainstream database and application development canon.

The market works as it should: incumbents add features in response to competitive pressures from new players that they likely would not have added on their own initiative. This allows room for innovative new players but also lets customers who wish to leverage their investments in existing platforms do just that.

It’s interesting how the fundamentals of relational algebra, SQL queries, and the like have stayed relevant over decades, but the utility, interoperability and applicability of databases keep increasing. It means the skillsets and technologies are investments that are not only safe but also, like good financial investments, grow in value and pay dividends.

Some customers will want a first-mover advantage, and so brand-new, innovative platforms will appeal to them. But many customers won’t want to re-platform or trade away their skillset investments and will prefer that vendors widen the existing road rather than introduce detours in it. Those customers should demand and place their bets with the vendors that embrace such approaches. And in 2025, they should expect vendors to welcome and accommodate them.



TNS owner Insight Partners is an investor in: SingleStore, Databricks.







Subscribe for MMS Newsletter

By signing up, you will receive updates about our latest information.

  • This field is for validation purposes and should be left unchanged.


The 9 Largest NYC Tech Startup Funding Rounds of December 2024 – AlleyWatch

MMS Founder
MMS RSS

Posted on mongodb google news. Visit mongodb google news

Armed with some data from our friends at CrunchBase, I broke down the largest NYC Startup funding rounds in New York for December 2024. The analysis includes supplementary details like industry, company descriptions, round types, and total equity funding raised to provide a more comprehensive view of the venture capital landscape in NYC.


The AlleyWatch audience is driving progress and innovation on a global scale. With its regional media properties, AlleyWatch serves as the highway for technology and entrepreneurship. There are a number of options to reach this audience of the world’s most innovative organizations and startups at scale including developing prominent brand placement, driving demand generation, and building thought leadership among the vast majority of key decision-makers in the New York business community and beyond. Learn more about advertising to NYC Tech at scale.



9. Cofactr $17.2M
Round: Series A
Description: Cofactr allows companies to optimize every step of the electronics supply-to-manufacturing journey. Founded by Matthew Haber and Phillip Gulley in 2021, Cofactr has now raised a total of $28.4M in total equity funding and is backed by Y Combinator, Pioneer Fund, Correlation Ventures, Bain Capital Ventures, and DNX Ventures.
Investors in the round: Bain Capital Ventures, Broom Ventures, DNX Ventures, Floating Point, Y Combinator
Industry: Software, Supply Chain Management
Founders: Matthew Haber, Phillip Gulley
Founding year: 2021
Total equity funding raised: $28.4M
AlleyWatch’s exclusive coverage of this round: Cofactr Raises $17.2M to Help Aerospace and Defense Companies Navigate Complex Supply Chains


8. Stainless $25.0M
Round: Series A
Description: Stainless is building the platform for high-quality, easy-to-use APIs. Founded by Alex Rattray in 2021, Stainless has now raised a total of $28.5M in total equity funding and is backed by Sequoia Capital, Felicis, The General Partnership, Zapier, and MongoDB Ventures.
Investors in the round: Felicis, MongoDB Ventures, Sequoia Capital, The General Partnership, Zapier
Industry: Developer APIs, Developer Platform, Developer Tools, Enterprise Software, SaaS
Founders: Alex Rattray
Founding year: 2021
Total equity funding raised: $28.5M


7. Sollis Health $33.0M
Round: Series B
Description: Sollis Health offers concierge medical centers that provide on-demand care for families, including same-day visits and virtual care options. Founded by Andrew Olanow, Benjamin Kruger, and Dr. Bernard Kruger in 2017, Sollis Health has now raised a total of $80.4M in total equity funding and is backed by Foresite Capital, Torch Capital, Montage Ventures, Arkitekt Ventures, and Friedom Partners.
Investors in the round: Foresite Capital, Friedom Partners, Montage Ventures, One Eight Capital, Read Capital, Torch Capital
Industry: Health Care, Health Diagnostics, Medical, Personal Health
Founders: Andrew Olanow, Benjamin Kruger, Dr. Bernard Kruger
Founding year: 2017
Total equity funding raised: $80.4M
AlleyWatch’s exclusive coverage of this round: Sollis Health Raises $33M to Transform Emergency Healthcare with Concierge Model


6. Basis $34.0M
Round: Series A
Description: Basis provides AI agents that automate accounting workflows for professionals. Founded by Ryan Serhant in 2023, Basis has now raised a total of $37.6M in total equity funding and is backed by Khosla Ventures, BoxGroup, Daniel Gross, Jeff Dean, and Kyle Vogt.
Investors in the round: Aaron Levie, Adam D’Angelo, Amjad Masad, Avid Ventures, Azeem Azhar, Better Tomorrow Ventures, BoxGroup, Claire Hughes Johnson, Clem Delangue, Daniel Gross, Douwe Kiela, Jack Altman, Jeff Dean, Jeff Wilke, Khosla Ventures, Kyle Vogt, Larry Summers, Lenny Rachitsky, Michele Catasta, Nat Friedman, NFDG Ventures, Noam Brown
Industry: Accounting, Artificial Intelligence (AI), Information Technology
Founders: Ryan Serhant
Founding year: 2023
Total equity funding raised: $37.6M


5. Sage $35.0M
Round: Series B
Description: Sage is an operation management system that enhances senior caregiving efficiency. Founded by Ellen Johnston, Matthew Lynch, and Raj Mehra in 2020, Sage has now raised a total of $59.0M in total equity funding and is backed by IVP, Friends & Family Capital, Maveron, ANIMO Ventures, and Goldcrest Capital.
Investors in the round: ANIMO Ventures, Distributed Ventures, Friends & Family Capital, Goldcrest Capital, IVP, Maveron, Plus Capital
Industry: Apps, Social, Software
Founders: Ellen Johnston, Matthew Lynch, Raj Mehra
Founding year: 2020
Total equity funding raised: $59.0M
AlleyWatch’s exclusive coverage of this round: Sage Raises $35M to Modernize Senior Living Operations


4. S.MPLE by SERHANT $45.0M
Round: Venture
Description: S.MPLE by SERHANT provides an AI-powered professional team of real estate experts. Founded by Ryan Serhant in 2020, S.MPLE by SERHANT has now raised a total of $45.0M in total equity funding and is backed by Camber Creek and Left Lane Capital.
Investors in the round: Camber Creek, Left Lane Capital
Industry: Commercial Real Estate, Real Estate, Real Estate Brokerage
Founders: Ryan Serhant
Founding year: 2020
Total equity funding raised: $45.0M
AlleyWatch’s exclusive coverage of this round: Real Estate Mega Broker Ryan Serhant Raises $45M for S.MPLE to Free Agents From Admin Work


3. Precision Neuroscience $102.0M
Round: Series C
Description: Precision Neuroscience is a company developing brain-computer interface technology. Founded by Benjamin Rapoport, Demetrios Papageorgiou, Mark Hettick, and Michael Mager in 2021, Precision Neuroscience has now raised a total of $248.0M in total equity funding and is backed by General Equity Holdings, Alumni Ventures, B Capital, Draper Associates, and Duquesne Family Office.
Investors in the round: B Capital, Duquesne Family Office, General Equity Holdings, Steadview Capital
Industry: Medical, Medical Device, Neuroscience, Product Research
Founders: Benjamin Rapoport, Demetrios Papageorgiou, Mark Hettick, Michael Mager
Founding year: 2021
Total equity funding raised: $248.0M


2. Public $105.0M
Round: Series D
Description: Public is a fractional investing platform that allows members to build diverse portfolios, including stocks, ETFs, crypto, and NFTs. Founded by Jannick Malling, Leif Abraham, Matt Kennedy, Peter Quinn, and Sean Hendelman in 2019, Public has now raised a total of $413.5M in total equity funding and is backed by Accel, Bossa Invest, Scott Belsky, Inspired Capital Partners, and Lakestar.
Investors in the round: Accel
Industry: Cryptocurrency, FinTech, Stock Exchanges, Trading Platform
Founders: Jannick Malling, Leif Abraham, Matt Kennedy, Peter Quinn, Sean Hendelman
Founding year: 2019
Total equity funding raised: $413.5M


1. Cleerly $106.0M
Round: Series C
Description: Cleerly is a digital healthcare company that offers heart disease diagnosis solutions. Founded by James K. Min in 2017, Cleerly has now raised a total of $386.5M in total equity funding and is backed by Novartis, Fidelity, Insight Partners, T. Rowe Price, and Sands Capital Ventures.
Investors in the round: Battery Ventures, Insight Partners
Industry: Apps, Artificial Intelligence (AI), Health Care, Medical, Wellness
Founders: James K. Min
Founding year: 2017
Total equity funding raised: $386.5M


Article originally posted on mongodb google news. Visit mongodb google news



How to Go from Copy and Paste Deployments to Full GitOps

MMS Founder
MMS Ben Linders

Article originally posted on InfoQ. Visit InfoQ

InnerSource helped reduce the amount of development work involved when introducing GitOps by sharing company-specific logic, Jemma Hussein Allen said at QCon London. In her talk, she showed how they went from copy and paste deployments to full GitOps. She mentioned that a psychologically safe environment is really important for open and honest discussions that can help resolve pain points and drive innovation.

Their version control tool at the time was Subversion, which they migrated to GitHub, the popular Git-based distributed version control and developer platform.

When they started the implementation of GitOps, their servers were running a LAMP stack (Linux, Apache, MySQL and PHP). This was a standard software stack for PHP web applications, Hussein Allen mentioned, which they didn’t want to change as the application was running well and the focus was on deployment automation.

The CI/CD tool of choice was Jenkins because of the flexibility in pipeline building block configuration and the large number of plugin integrations with other tools that were available, Hussein Allen said. Puppet was used for configuration management as it was already implemented for the more recent deployments and worked well, so the decision was made to continue with it, she mentioned.

Hussein Allen mentioned the four GitOps principles:

  • Declarative – the desired system state needs to be defined declaratively
  • Versioned and Immutable – the desired system state needs to be stored in a way that is immutable and versioned
  • Automatic pulls – the desired system state is automatically pulled from source without manual interventions
  • Continuous reconciliation – the system state is continuously monitored and reconciled
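
The principles above can be condensed into a reconciliation loop: desired state is pulled from a versioned source, compared against observed state, and any drift is corrected. A minimal sketch in Python (the resource names and action tuples are illustrative, not any particular GitOps tool's API):

```python
def reconcile(desired: dict, observed: dict) -> dict:
    """Compute the actions needed to move observed state to desired state."""
    actions = {}
    for name, spec in desired.items():
        if observed.get(name) != spec:
            actions[name] = ("apply", spec)   # create or update drifted resources
    for name in observed:
        if name not in desired:
            actions[name] = ("delete", None)  # prune resources no longer declared
    return actions

# Desired state, as it would be pulled from version control:
desired = {"web": {"replicas": 3}, "db": {"replicas": 1}}
# Observed state, as reported by the running system:
observed = {"web": {"replicas": 2}, "legacy": {"replicas": 1}}
print(reconcile(desired, observed))
```

A GitOps controller runs this comparison continuously, which is what the continuous reconciliation principle refers to.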

The principles they found the most important were a declarative system state definition, solid versioning and continuous reconciliation, because of the benefits they brought in terms of faster development and deployments, Hussein Allen said.

The removal of all manual interventions is the principle they applied most cautiously, Hussein Allen said, as it required a fully comprehensive set of automated tests, high availability, a very mature monitoring solution, and compatibility with team ways of working.

Developers customised their workspaces with Docker, as Hussein Allen explained:

We set up an image registry where developers could download base images and use these as building blocks to develop and test new features locally before integrating them into the multi-environment test and deployment workflow.

Before the changes, developers would change code locally and deploy to the development environment to test, Hussein Allen said. The challenge with this testing method came when multiple developers needed to test different changes in the development environment at the same time, she explained:

When the development environment included multiple changes, it didn’t allow for testing in isolation and didn’t provide reliable results. The image registry meant that developers could test their own changes locally, against the code running in production, in isolation before integrating the change into the main development testing pool.

A common theme that emerged from developer feedback on GitOps was the need for building blocks and a “quick start” guide to help teams adopt it more quickly. Introducing an InnerSource capability encouraged developers to create and contribute to these building blocks and boilerplates, Hussein Allen said. The increased contribution to shared resources directly and positively impacted the speed at which developers could adopt new tooling.

As developer requirements evolve, creating an open dialogue within a psychologically safe environment is invaluable for understanding developers’ changing needs, Hussein Allen said. Establishing regular connections and solid offline communication channels ensures developers stay up to date with the platform roadmap and any alterations that could impact their work. These channels also provide valuable opportunities for developer feedback and suggestions, which can be integrated into the platform strategy, she concluded.

InfoQ interviewed Jemma Hussein Allen about adopting GitOps.

InfoQ: What did you do to help developers with the transition from the old way of working?

Jemma Hussein Allen: We spent time training or pair programming with developers. Some teams took longer to transition to the newer way of working, mainly due to heavy workloads, as it took time for developers to familiarise themselves with the new process and become as efficient using it as they were with the old way of working.

Working with team product owners and stakeholders to show the benefits of the new process helped to give developers the bandwidth to learn and adopt the new tooling into their daily work.

InfoQ: What’s your approach to knowing the needs of the developers and keeping the dialogue going?

Hussein Allen: What we found helpful in understanding developer needs was providing the opportunity for platform engineers and developers to work more closely together. In organisations with a centralised platform team structure, initiatives such as “Walk a day in my shoes” where platform engineers are embedded into product teams for a short time and vice versa can be really valuable to get an understanding of any pain points or improvements that can be made to the platform.

