Gemma 3n Introduces Novel Techniques for Enhanced Mobile AI Inference

MMS Founder
MMS Sergio De Simone

Launched in early preview last May, Gemma 3n is now officially available. It targets mobile-first, on-device AI applications, using new techniques designed to increase efficiency and improve performance, such as per-layer embeddings and transformer nesting.

Gemma 3n uses Per-Layer Embeddings (PLE) to reduce the RAM required to run a model while maintaining the same number of total parameters. The technique consists of loading only the core transformer weights into accelerated memory, typically VRAM, while the rest of the parameters are kept on the CPU. Specifically, the 5-billion-parameter variant of the model only requires 2 billion parameters to be loaded into the accelerator; for the 8-billion variant, it’s 4 billion.
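The split can be pictured with a toy sketch (illustrative only; this is not Google's implementation, and the class, shapes, and parameter counts are invented — only the idea of keeping core weights in accelerator memory and per-layer embeddings in host memory comes from the article):

```python
class PerLayerEmbeddingModel:
    """Toy sketch of Per-Layer Embeddings (PLE), not Google's code.

    Core transformer weights live in 'accelerator' memory; the per-layer
    embedding tables stay in host (CPU) memory and are looked up on demand,
    so only the core parameters count toward accelerator RAM."""

    def __init__(self, n_layers=4, d_model=64, vocab=1000):
        # Core weights: these would be loaded into VRAM
        # (the "effective" parameters, e.g. 2B of the 5B total).
        self.core = {f"layer_{i}": [[0.0] * d_model for _ in range(d_model)]
                     for i in range(n_layers)}
        # Per-layer embeddings: kept in host RAM and streamed in per token.
        self.ple = {f"layer_{i}": [[0.0] * d_model for _ in range(vocab)]
                    for i in range(n_layers)}

    def accelerator_params(self):
        # Only the core weights occupy accelerator memory.
        return sum(len(w) * len(w[0]) for w in self.core.values())

    def total_params(self):
        # Total parameter count still includes the host-resident embeddings.
        return self.accelerator_params() + sum(
            len(w) * len(w[0]) for w in self.ple.values())

model = PerLayerEmbeddingModel()
```

In this toy model the accelerator holds 16,384 of 272,384 parameters; the real trick, per Google, is analogous at billion-parameter scale.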

Another novel technique is MatFormer (short for Matryoshka Transformer), which allows transformers to be nested so that a larger model, e.g. one with 4B parameters, contains a smaller version of itself, e.g. one with only 2B parameters. This approach enables what Google calls elastic inference and allows developers to choose either the full model or its faster but fully-functional sub-model. MatFormer also supports a Mix-n-Match method that lets developers create intermediate-sized versions:

This technique allows you to precisely slice the E4B model’s parameters, primarily by adjusting the feed forward network hidden dimension per layer (from 8192 to 16384) and selectively skipping some layers.
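As a rough illustration of what a Mix-n-Match plan might look like, here is a hypothetical config slicer. The function and field names are invented; only the 8192/16384 feed-forward widths and the idea of skipping layers come from the quote above:

```python
# Hypothetical Mix-n-Match sketch: derive an intermediate-size model from a
# MatFormer by choosing a feed-forward hidden width per layer (between the
# sub-model's 8192 and the full model's 16384) and optionally skipping layers.

SUB_FFN, FULL_FFN = 8192, 16384

def mix_n_match(n_layers, ffn_dims, skip_layers=()):
    """Return a per-layer plan for an intermediate-size model."""
    assert len(ffn_dims) == n_layers
    plan = []
    for i, dim in enumerate(ffn_dims):
        if i in skip_layers:
            continue  # drop this layer entirely
        assert SUB_FFN <= dim <= FULL_FFN
        plan.append({"layer": i, "ffn_hidden_dim": dim})
    return plan

# Slice a 6-layer toy model: mixed widths, last layer skipped.
plan = mix_n_match(6, [8192, 12288, 16384, 8192, 12288, 16384],
                   skip_layers={5})
```

The resulting plan keeps five layers at varying widths, i.e. a model sized between the 2B sub-model and the full 4B model.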

In the future, Gemma 3n will fully support elastic inference, enabling dynamic switching between the full model and the sub-model on the fly, depending on the current task and device load.

Another new feature in Gemma 3n aimed at speeding up inference is KV cache sharing, which is designed to improve time-to-first-token, a key metric for streaming response applications. Using this technique, which according to Google is particularly efficient with long contexts:

The keys and values of the middle layer from local and global attention are directly shared with all the top layers, delivering a notable 2x improvement on prefill performance compared to Gemma 3 4B.
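A simplified counting exercise shows why sharing helps prefill. This is a hypothetical model, not Gemma 3n's actual mechanism; in particular it ignores the distinction between local and global attention that the real design makes:

```python
def prefill_kv_entries(n_layers, seq_len, d_head, share_from=None):
    """Count KV-cache values computed during prefill.

    If share_from is set, layers above it reuse that layer's keys and
    values instead of computing their own (a simplified stand-in for
    Gemma 3n's KV cache sharing)."""
    computed = 0
    for layer in range(n_layers):
        if share_from is not None and layer > share_from:
            continue  # top layers reuse the shared middle-layer K/V
        computed += 2 * seq_len * d_head  # one K and one V per position
    return computed

baseline = prefill_kv_entries(12, seq_len=1024, d_head=64)
shared = prefill_kv_entries(12, seq_len=1024, d_head=64, share_from=5)
```

With a 12-layer stack sharing from the middle layer, only half the layers compute their own keys and values during prefill, which is roughly consistent in magnitude with the 2x prefill improvement Google reports.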

Gemma 3n also brings native multimodal capabilities, thanks to its audio and video encoders. On the audio front, it enables on-device automatic speech recognition and speech translation.

The encoder generates a token for every 160ms of audio (about 6 tokens per second), which are then integrated as input to the language model, providing a granular representation of the sound context.
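Taking the article's numbers at face value, the token budget is easy to check (the encoder itself is not shown; only the 160ms-per-token rate comes from the quote):

```python
# Back-of-the-envelope token budget for the audio encoder:
# one token per 160 ms of audio.
MS_PER_TOKEN = 160

def audio_tokens(duration_s):
    """Number of whole tokens produced for a clip of the given length."""
    return int(duration_s * 1000 / MS_PER_TOKEN)

per_second = audio_tokens(1)    # "about 6 tokens per second"
per_clip = audio_tokens(30)     # the launch-time 30-second clip limit
```

A 30-second clip therefore costs on the order of 187 tokens of context in the language model.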

Google says it has observed strong results translating between English and Spanish, French, Italian, and Portuguese. While Gemma 3n's audio encoder can process arbitrarily long audio thanks to its streaming architecture, it is limited to clips of up to 30 seconds at launch.

As a final note about Gemma 3n, it is worth highlighting that it supports resolutions of 256×256, 512×512, and 768×768 pixels and can process up to 60 frames per second on a Google Pixel device. In comparison with Gemma 3, it delivers a 13x speedup with quantization (6.5x without) and has a memory footprint that is four times smaller.


Subscribe for MMS Newsletter

By signing up, you will receive updates about our latest information.


Janney Montgomery Scott LLC Buys 2,612 Shares of MongoDB, Inc. (NASDAQ:MDB)

MMS Founder
MMS RSS

Janney Montgomery Scott LLC bought a new position in MongoDB, Inc. (NASDAQ:MDB) during the 1st quarter, according to its most recent disclosure with the Securities and Exchange Commission. The institutional investor bought 2,612 shares of the company’s stock, valued at approximately $458,000.

A number of other hedge funds and other institutional investors also recently made changes to their positions in MDB. Vanguard Group Inc. boosted its stake in shares of MongoDB by 0.3% during the 4th quarter. Vanguard Group Inc. now owns 7,328,745 shares of the company’s stock worth $1,706,205,000 after acquiring an additional 23,942 shares in the last quarter. Franklin Resources Inc. lifted its holdings in MongoDB by 9.7% in the 4th quarter. Franklin Resources Inc. now owns 2,054,888 shares of the company’s stock worth $478,398,000 after purchasing an additional 181,962 shares during the last quarter. Geode Capital Management LLC boosted its position in MongoDB by 1.8% during the fourth quarter. Geode Capital Management LLC now owns 1,252,142 shares of the company’s stock worth $290,987,000 after purchasing an additional 22,106 shares during the period. First Trust Advisors LP grew its holdings in MongoDB by 12.6% during the fourth quarter. First Trust Advisors LP now owns 854,906 shares of the company’s stock valued at $199,031,000 after purchasing an additional 95,893 shares during the last quarter. Finally, Norges Bank bought a new position in shares of MongoDB in the fourth quarter valued at approximately $189,584,000. 89.29% of the stock is owned by hedge funds and other institutional investors.

Wall Street Analysts Forecast Growth

Several research firms recently issued reports on MDB. Macquarie reaffirmed a “neutral” rating and set a $230.00 price objective (up from $215.00) on shares of MongoDB in a report on Friday, June 6th. DA Davidson restated a “buy” rating and set a $275.00 price target on shares of MongoDB in a report on Thursday, June 5th. UBS Group raised their price target on MongoDB from $213.00 to $240.00 and gave the company a “neutral” rating in a research report on Thursday, June 5th. Royal Bank Of Canada reissued an “outperform” rating and set a $320.00 price objective on shares of MongoDB in a research report on Thursday, June 5th. Finally, Oppenheimer decreased their target price on MongoDB from $400.00 to $330.00 and set an “outperform” rating for the company in a research note on Thursday, March 6th. Eight investment analysts have rated the stock with a hold rating, twenty-five have issued a buy rating and one has given a strong buy rating to the stock. According to MarketBeat.com, MongoDB currently has an average rating of “Moderate Buy” and a consensus target price of $282.47.



Insider Activity at MongoDB

In other MongoDB news, Director Hope F. Cochran sold 1,174 shares of MongoDB stock in a transaction on Tuesday, June 17th. The shares were sold at an average price of $201.08, for a total transaction of $236,067.92. Following the completion of the sale, the director owned 21,096 shares of the company’s stock, valued at approximately $4,241,983.68. The trade was a 5.27% decrease in their position. The transaction was disclosed in a legal filing with the Securities & Exchange Commission, which is accessible through the SEC website. Also, CEO Dev Ittycheria sold 25,005 shares of the firm’s stock in a transaction on Thursday, June 5th. The stock was sold at an average price of $234.00, for a total value of $5,851,170.00. Following the completion of the transaction, the chief executive officer owned 256,974 shares in the company, valued at $60,131,916. This represents an 8.87% decrease in their ownership of the stock. Insiders have sold 28,999 shares of company stock worth $6,728,127 over the last ninety days. Company insiders own 3.10% of the company’s stock.

MongoDB Stock Performance

NASDAQ MDB opened at $211.05 on Friday. The stock has a market capitalization of $17.24 billion, a P/E ratio of -185.13 and a beta of 1.41. MongoDB, Inc. has a 52 week low of $140.78 and a 52 week high of $370.00. The stock has a fifty day moving average of $194.66 and a 200-day moving average of $215.72.

MongoDB (NASDAQ:MDB) last posted its quarterly earnings results on Wednesday, June 4th. The company reported $1.00 earnings per share (EPS) for the quarter, beating the consensus estimate of $0.65 by $0.35. MongoDB had a negative return on equity of 3.16% and a negative net margin of 4.09%. The business had revenue of $549.01 million for the quarter, compared to analysts’ expectations of $527.49 million. During the same quarter last year, the business earned $0.51 earnings per share. The firm’s quarterly revenue was up 21.8% compared to the same quarter last year. Equities research analysts forecast that MongoDB, Inc. will post -1.78 earnings per share for the current fiscal year.

MongoDB Profile


MongoDB, Inc., together with its subsidiaries, provides a general-purpose database platform worldwide. The company provides MongoDB Atlas, a hosted multi-cloud database-as-a-service solution; MongoDB Enterprise Advanced, a commercial database server for enterprise customers to run in the cloud, on-premises, or in a hybrid environment; and Community Server, a free-to-download version of its database, which includes the functionality that developers need to get started with MongoDB.

See Also

Want to see what other hedge funds are holding MDB? Visit HoldingsChannel.com to get the latest 13F filings and insider trades for MongoDB, Inc. (NASDAQ:MDB).

Institutional Ownership by Quarter for MongoDB (NASDAQ:MDB)





3 of Wall Street’s Favorite Stocks Facing Headwinds – StockStory

MMS Founder
MMS RSS



Cover image: MDB (©StockStory)


Max Juang / 2025/07/04 12:33 am EDT


Wall Street has set ambitious price targets for the stocks in this article.
While this suggests attractive upside potential, it’s important to remain skeptical because analysts face institutional pressures that can sometimes lead to overly optimistic forecasts.

Unlike the investment banks, we created StockStory to provide independent analysis that helps you determine which companies are truly worth following. That said, here are three stocks where Wall Street’s estimates seem disconnected from reality and some better opportunities to consider.

MongoDB (MDB)

Consensus Price Target: $265.99 (25.9% implied return)

Started in 2007 by the team behind Google’s ad platform, DoubleClick, MongoDB offers a database-as-a-service that helps companies store large volumes of semi-structured data.

Why Do We Think Twice About MDB?

  1. Its track record of operating margin losses stems from its decision to pursue growth instead of profits
  2. Poor free cash flow margin of 7.6% for the last year limits its freedom to invest in growth initiatives, execute share buybacks, or pay dividends

MongoDB is trading at $211.19 per share, or 7.2x forward price-to-sales. Dive into our free research report to see why there are better opportunities than MDB.

Upland (UPLD)

Consensus Price Target: $4.25 (108% implied return)

Founder Jack McDonald’s second software rollup, Upland Software (NASDAQ:UPLD) is a one-stop shop for sales and marketing software, project management, HR, and contact center services for small and medium-sized businesses.

Why Do We Think UPLD Will Underperform?

  1. Annual sales declines of 4.4% for the past three years show its products and services struggled to connect with the market
  2. Sales are expected to decline once again over the next 12 months as it continues working through a challenging demand environment
  3. Competitive market means the company must spend more on sales and marketing to stand out even if the return on investment is low

Upland’s stock price of $2.04 implies a valuation ratio of 0.3x forward price-to-sales. Check out our free in-depth research report to learn more about why UPLD doesn’t pass our bar.

Harley-Davidson (HOG)

Consensus Price Target: $28.92 (13.3% implied return)

Founded in 1903, Harley-Davidson (NYSE:HOG) is an American motorcycle manufacturer known for its heavyweight motorcycles designed for cruising on highways.

Why Do We Steer Clear of HOG?

  1. Number of motorcycles sold has disappointed over the past two years, indicating weak demand for its offerings
  2. Shrinking returns on capital suggest that increasing competition is eating into the company’s profitability
  3. 13× net-debt-to-EBITDA ratio shows it’s overleveraged and increases the probability of shareholder dilution if things turn unexpectedly

At $25.53 per share, Harley-Davidson trades at 7.7x forward P/E. To fully understand why you should be careful with HOG, check out our full research report (it’s free).

High-Quality Stocks for All Market Conditions

Market indices reached historic highs following Donald Trump’s presidential victory in November 2024, but the outlook for 2025 is clouded by new trade policies that could impact business confidence and growth.

While this has caused many investors to adopt a “fearful” wait-and-see approach, we’re leaning into our best ideas that can grow regardless of the political or macroeconomic climate.
Take advantage of Mr. Market by checking out our Top 5 Strong Momentum Stocks for this week. This is a curated list of our High Quality stocks that have generated a market-beating return of 183% over the last five years (as of March 31st 2025).

Stocks that made our list in 2020 include now familiar names such as Nvidia (+1,545% between March 2020 and March 2025) as well as under-the-radar businesses like the once-micro-cap company Kadant (+351% five-year return). Find your next big winner with StockStory today for free.


From Relational to NoSQL: A Guide to Migrating Your Application to Amazon DynamoDB

MMS Founder
MMS RSS

Organizations migrating from relational databases to Amazon DynamoDB face challenges in redesigning data models and access patterns. Understanding the fundamental differences in query capabilities, data modeling approaches, and application architecture is essential for optimal performance and scalability. A real-world example illustrates the challenges of migrating a social media platform experiencing rapid growth, with complex queries and scalability concerns. The article is the first part of a series exploring how to effectively migrate from SQL to DynamoDB, focusing on analyzing existing database structures and access patterns to prepare for migration.
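To make the modeling shift concrete, here is a hypothetical sketch of a DynamoDB-style single-table key design for the social media example. The item shapes and key names are invented for illustration; the article itself contains no code:

```python
# In DynamoDB you design items around access patterns (partition key +
# sort key) rather than normalized relational tables. Here, a user's
# profile and posts share one partition, so "fetch everything for user X"
# is a single key-condition query instead of a relational join.

def user_item(user_id, name):
    return {"PK": f"USER#{user_id}", "SK": "PROFILE", "name": name}

def post_item(user_id, post_id, text):
    # Posts live in the user's partition, sorted by post id.
    return {"PK": f"USER#{user_id}", "SK": f"POST#{post_id}", "text": text}

table = [
    user_item("42", "alice"),
    post_item("42", "001", "hello world"),
]

# Access pattern: all items for user 42, in sort-key order.
user_42 = [item for item in table if item["PK"] == "USER#42"]
```

The in-memory list stands in for the table; with boto3 the same pattern would be a `Query` on the partition key.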



Podcast: Trust-first Leadership and Building Great Teams

MMS Founder
MMS Natan Zabkar Nordberg

Transcript

Shane Hastie: Good day, folks. This is Shane Hastie for the InfoQ Engineering Culture podcast. Today I’m sitting down at opposite sides of the world with Natan Žabkar Nordberg. Natan, welcome. Thank you for taking the time to talk to us.

Natan Žabkar Nordberg: Welcome to you, too. Thank you, Shane. It’s great to be here.

Shane Hastie: I am outside Wellington in New Zealand and you’re in Edinburgh, Scotland. So we are as far apart as it’s possible to be, but joined by the miracles of technology. My normal starting point in these conversations is who’s Natan?

Introductions [01:07]

Natan Žabkar Nordberg: Well, that’s a question that cuts a little bit deeper than the usual what do you do? I’ll cheekily answer that part first, actually. So I’m an engineering manager, which doesn’t really tell you much since it can mean very different things for different companies. But the important part for me is that I absolutely love all the main parts of my job and the way I see them, there’s three of them. There’s the technical part, which can either be low-level technical work, writing code, things like that.

Or more high-level, which is architectural discussions, talking about the technical problems in a general sense, things like that. Then there’s the organizational side of things, which is leading teams, organizing people, turning a little bit of that chaos into order. And then there’s the people side of things with which I mean both your usual line management, but also the more personal management.

So for instance, I don’t believe that when I’m a manager, I’m just a manager for the company. My job is not to just get the most out of a person for the company or anything like that. My job is also to help support the person themselves and perhaps the best way to support them is to tell them that they should be looking for something else, not my company specifically. And I feel that that last part is definitely to me, the most important part of my job, the people part. Everything else I kind of do at home or in some other way. But the people part is very different. It’s never the same. It is very different with every single person. It is super interesting and it is not something you can do at home. But the fact that I like all those parts of my job means that over the years I shifted to startups because there I get to wear multiple hats.

But your question was, who am I? Well, I have a bit of a non-traditional background. As you can probably hear, I’m not originally from Scotland. I was actually born and raised on a farm in Slovenia, which is a small country in between Italy, Austria, Hungary, and Croatia, about 2 million people. And even though I’ve always loved tech, I also always knew I wanted to do something related to people.

So even though I was in the evenings playing around with some code and learning how to code a little bit at home, I actually studied theoretical mathematics. And then I’ve been a tutor, I’ve been a teacher, I worked on the stage and off the stage in theatre, and I’ve done a bunch of other things. But in the end, kind of was expected, the tech part prevailed and I started properly getting into that. And when I had a chance, I then moved to management to get me back into that people aspect of things.

So I guess my life went through a couple of twists and turns, especially when they also met my wife here who is Swedish and we start a family, and I definitely don’t think my 20-year-old self could have predicted where I’ll end up. But at the end of the day, not that much has changed. I feel I see myself as a person who hopes to make life a little bit better for the people around him in whatever small way I can. And I think that part has actually stayed the same since I was a lot younger than I am now.

Shane Hastie: This is the Engineering Culture podcast where we go deep into the people stuff. Let’s start with culture. What is good culture in an engineering context?

Aspects of good culture [04:12]

Natan Žabkar Nordberg: I think that’s a very interesting question because I don’t believe it has a singular answer. I think good culture is the culture that works for you specifically, and I think that what works for you might be very, very different and what works for me. I think what works for you might also be different with what works for you in three months’ time, because we change. So I think the right question to answer is maybe not even good, but what is the right culture for you at this point in time and what might be the right culture for you in the future? I became a dad about a year ago, almost exactly a year ago now, and obviously what was best for me changed at that time.

I started prioritizing more personal life. I started prioritizing more security, things like that. But I think there are probably a couple of core questions you always need to ask yourself, which is questions like, do you want an environment where you’re pushed and everybody tries really hard? Do you want an environment where people go to work and work is just work? Do you want an environment where people socialize and they become fans? Do you want an environment where people leave you alone? These are the kind of questions you should ask yourself to identify what’s good for you. So I can talk a bit about what’s good for me if you want to, but I don’t think there’s an answer to what is just good in general.

Shane Hastie: Cool. So what’s good for you?

Natan Žabkar Nordberg: So I think there’s a couple of things that are really important to me culture-wise. I think, in no particular order, I want to be at a place where I can feel like people are people and I don’t limit that just to the employees of the company. I want a place that treats, be it the employer, be it the employee, be it your coworkers, or yourself, be it people who are your clients or users or whatever, or be it just the random person that comes into deliver mail. I want to have a company that treats them as people.

Now that could mean just being nice to them because we’re all humans and it’s nice to be nice to each other, but it also means understanding that they come with their own, let’s call them needs and wants. The example I tend to use, so I tend to work remotely for the past couple of years, so I’ve been working fully remote for the past three years or so, and one of the common questions I get from my direct reports is questions like, can I go to the gym, or can I go to the store in the middle of the workday?

And I used to answer that by saying, “You do what you want to do, you’re an adult, I don’t really care”. And then I realize it’s actually a really bad answer because it’s not true. I do care. I care that they get to do what makes their lives better, if it doesn’t hurt me or the company especially. And I care about them doing what makes them more effective, I always feel like you see things both as a person and as an employee of the company that tries to make people effective when you’re a manager. And this helps both cases, which is great. And I always tell people, “As long as it works for the people you work with, please organize your day the way you see fit because it’s better for you”. So that’s what I mean with seeing people as people.

Then there are a couple of other things. Again, in no particular order, I spend a lot of time at work. It’s important to me that I can be myself. For me, that means that I can be a little bit silly. I don’t need to make great friends, but I want to be at least friendly with the people I work with and that I don’t have to play… We all play a role sometimes, but that I don’t have to constantly play a role, that I can have an honest conversation with somebody, that I can be who I am, that I can talk about my hobbies. Those things are important to me. There’s a couple of other parts I guess when it comes to work, I do not want to do things that actively make the world the worst place. I’ll let you all decide what that means, but I would not want to work for a company that I believe is actively making the world a worse place.

I will admit I’ll not go as far anymore to say I really want to make the world a better place. It’s a little bit naive and I think that usually we are making the world a better place in some small way, be it by helping somebody get a better device. You are booking a hotel better, or traveling easier, or whatever. Those are all making the world a better place in a small way. But I think the slightly childish idea of we’re all saving the world has sadly been hit with a dose of realism through the years. So these are the things I care about.

Shane Hastie: In your role as a manager, as a leader of teams, how do you help your folks take that ownership for themselves?

Building trust and ownership [08:33]

Natan Žabkar Nordberg: Again, a very good question. I think that there are a couple of different things you need for people to be able to take clear ownership of things. It’s not a super simple answer. I would say it probably starts with trust. I’m going to have a slightly controversial statement coming here, which is that I don’t believe that trust can be earned unless it’s given first. It is very difficult for somebody to prove that you can trust them unless you give them an opportunity to prove that you can trust them. And I think especially in remote environments, again, that becomes even more difficult because you lack some of these day-to-day interactions. So what I think is a very effective way of doing this, and this is also something I experienced myself from my own manager, is that you actively give your trust to people.

Say, “I will trust you with this”. You discuss what does that actually mean and then you actually trust that person, as easy as it sounds to say that. But what I mean by that is you don’t then micromanage. You don’t look over their shoulder the whole time. You don’t step in and say, “Oh, I can help you. We’re going to do this better together”. No, no, you actually let them do their own thing. You support them obviously, but you have maybe a weekly session where you say, “What can I help you with?” And stuff like that. Or maybe you agree on limitations on discuss things like, are you responsible, are you accountable for it? Am I going to have your back or am I going to completely remove myself from the organization to ensure that I can really confidently say, if somebody comes in and says, did they do a good job?

I can be like, well, you’ve seen the job that was done and I was not there even once I was in zero meetings and it’s both helpful for them because I can really prove that they have done the work and it’s also a little bit scary for them because it means if something goes wrong, I’m not there to support them. So this trust is a bit of a two-way street. It’s an agreement between people, but I think it is an agreement. You tell them I’m going to trust you. And I think that with trust people suddenly become a lot more autonomous. And if they are autonomous, there’s still an interesting problem you might hit, which is that we often think about autonomy as we will give you nothing, just go and solve a problem. But I don’t think that’s the best way of doing it because, so I believe that we as humans don’t tend to work well with no limitations.

Telling somebody, completely greenfield, you can do whatever you want. It’s never really true, because we all have our own limitations on our product, on our expectations. If I tell somebody something like this and they say, oh, I’m going to take me 27 years to build this, of course I’m going to say no. So what I try and do instead is guided autonomy, which is not a term that I have coined or whatever. It’s a quite common term and it really just means putting some limitations and putting some boundaries around that person. I am a mathematician by trade, so I almost see this as vectors and as long as the vector is pointing in the right general direction, they get to guide it exactly how it goes and where it goes.

But I need to point in the general direction. So I think with those two things, with a combination of trust and autonomy, people will feel empowered to take the initiative they need, and they will be able to actually take proper ownership. Because to me, delegating something with a bunch of strings attached and treating somebody like a second pair of hands doing exactly what you do, that’s not somebody owning it, that’s just somebody executing on the task you could have executed on. That’s not where we get the value.

Shane Hastie: What if the person doesn’t have the fundamental skill to do the role? How do you help them build the competency, build the skill?

Building skills and competency [11:56]

Natan Žabkar Nordberg: I guess the assumption here is that they want to actually build that skill and that role. Another thing about me is that I believe that while certain progressions are quite natural, let’s say a junior developer to a mid-level developer, that’s a very natural progression, you’re just going to do the same thing, but be better. There’s definitely a part you reach in your career where progression becomes a very conscious choice. I will change what I’m doing, I’ll do it in a different way. I will do… Something like that. So assuming we’re talking about a situation where a person wants to change or wants to do this role instead of feeling like they have to for whatever reason, then I think you just work with them, you talk to them, you say, “Let’s talk a little bit about what you’re doing right now and where you need to be to be able to achieve what this role requires”.

And this role could either be a big role like a senior engineer or principal engineer, it could be a smaller role like leading this project. I think that depending on the person, it could be quite useful to do a bit of a gap analysis if they’re more of a rational approach person and then analyse all the gaps and say, “Hey, here are the things that you’re doing”, because gaps go both ways. “Here’s the things you’re actually doing better than expected, and here’s the things where you might need to improve a little bit”. And then with that, hopefully you have enough guidance to talk about how do we improve things. And I think that’s where it goes back to a similar thing I talked about in trust. It’s a two-way conversation. Some people work best by just being thrust into it and they have to deal with it and they’re stressed and they feel in quotes, terrible, but actually enjoy it.

That’s how they grow. Some people really want a massive safety net and they say, “I feel safest and best if I grow really slowly with a lot of support. I wanted to be here”. So it really depends on each specific person how I want to deal with it. But I think that assuming they are not completely new to the business or to their career, they will have a decent understanding of how they work best. And I think your job as a manager is to support them in what they do best, not force your way of working and thinking upon them.

Shane Hastie: The primary unit of value delivery today in most organizations now is the team. What makes a good team?

What makes a good team [14:00]

Natan Žabkar Nordberg: What business are you? So what I mean is, are you the business that cares primarily about your product and deliver that product? Or your business cares about the long-term, let’s call it team health and team quality. As in, I feel like I’ve talked to especially startups where it’s all about the product and then talked to startups that are like, it’s all about the team. If this team pivots to something else, we’ll still be happy, we’ll still be effective. And some people are like, no, it’s the product. The team can change five times, the product matters. Do you want to guide that question more or?

Shane Hastie: Well, I have a bias towards the long-lived stable team. I did write the book #noprojects.

Natan Žabkar Nordberg: Yes. I assumed you would say that. Yes, I would say that if your focus is on the team, then it is really important to build a culture, and I mean a team culture, not just company culture where people can bring their best selves. That means anything from just encouraging people. On the positive side, it means things like encouraging people. It means things like helps working towards their strengths, helping shore up their weaknesses, stuff like this. On the negative side, it also means managing any conflicts that appear very thoroughly. I think in a team that wants to be long-lived and wants to work together closely, there is no option to say, oh, these two people don’t work together well, we’ll just put them on side projects. They’ll just not interact with each other. While in a bigger team that cares more about the product delivery, you could do something like that.

So I think you have to be very conscious of ensuring people go through enough shared experiences that they become a cohesive unit, they understand each other’s communication patterns, they understand what drives each other, and they are willing to put other people first before themselves. Because it turns out if everybody does that, everybody benefits. But if you then have one person who says, no, no, no, I’m going to put myself first while everybody else is putting each other first, that can cause a massive conflict in the team. So I think you have to be a lot more careful about managing, let’s call them the outliers in this team, when it comes to the cultural side.

And then what I would say is you have an effective team. They can work on anything, really. If you get people to work together well, then any problem you throw at them, they can handle, I think, nowadays. There are exceptions. There are certain companies that work on deep, detailed, really hardcore tech. That’s different, but most of us work on products where we’re not inventing completely new things. We just need people who are decent engineers, who are good at working together, who are good at the product side of things, and the tech is comparatively easy in nature. Does that answer your question?

Shane Hastie: It starts to. What about the diversity in that team? I know that you’ve got some particular perspectives on what is diversity, so let’s dig into that.

Diversity and different perspectives [16:52]

Natan Žabkar Nordberg: Yes, so I would say I care a lot about what is marked as diversity and inclusion, but when we talk about diversity, I want to go a little bit beyond what is often marked as diversity in the media or in the reports you see. So what I mean by that is not that I do not care about the fact that people are from different countries or different genders or different skin colours and things like that. I do care about it because it affects your lived experiences. But what I really care about is those lived experiences.

Two people could look almost exactly the same, but if they grew up in completely different environments, they’ll have a very different way of thinking, a very different way of learning, a very different way of communicating. And two people could look massively different when you just look at them, but they could have grown up in such a similar environment that maybe they’re actually closer than two people who look very similar.

And I think that you get a lot of power and a lot of strength from this diversity. Very silly example, I mentioned before I’m Slovenian, one of the things when I started coding, I used to work on like CMS systems, and I used to use some very basic options for doing translations. And a lot of those options only supported singular and dual. So you could choose whatever, Bob has one apple, Bob has two apples. But turns out that certain languages have more than that. Slovenian has singular, has dual, has plural, and then has a second plural in a way. So instead of just having, let’s call it an if statement for single and for plural, I needed multiple statements for singular, for dual, for plural, and for a second plural. And that’s something you wouldn’t really know unless somebody tells you about it, and somebody probably wouldn’t know about it unless they lived it.
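The pluralization point can be made concrete. Below is a minimal Python sketch of the Slovenian plural selection described above, roughly following CLDR-style categories for integers; the category names and the exact Slovenian word forms are illustrative assumptions, not taken from the transcript:

```python
def slovenian_plural_category(n: int) -> str:
    """Return the plural category for a non-negative integer in Slovenian.

    Slovenian distinguishes singular, dual, a small plural (3-4),
    and a second plural (0, 5+), with the pattern cycling every 100.
    """
    mod100 = n % 100
    if mod100 == 1:
        return "one"    # singular: 1, 101, 201, ...
    if mod100 == 2:
        return "two"    # dual: 2, 102, ...
    if mod100 in (3, 4):
        return "few"    # 3, 4, 103, 104, ...
    return "other"      # 0, 5-100, 105, ...

# One message variant per category, instead of a single
# singular/plural if statement (word forms are illustrative).
APPLE_FORMS = {
    "one": "jabolko",
    "two": "jabolki",
    "few": "jabolka",
    "other": "jabolk",
}

def bob_has_apples(n: int) -> str:
    return f"Bob ima {n} {APPLE_FORMS[slovenian_plural_category(n)]}"
```

A two-way singular/plural branch, as in the basic CMS translation options mentioned above, cannot express this four-way split, which is exactly the gap lived experience surfaced here.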

So I’ve worked in teams that had people from various backgrounds, from classical software engineering training to a master’s in Greek mythology before switching to software engineering. And everybody brought a very different way of thinking about things, how they would solve problems, how to approach problems, and how we think about our product. So I guess that’s the other part of the question. What makes the team effective is, I think, that the team has to be built in a way that they can actually handle the problems. And part of that is also diversity, be it diversity of lived experiences, diversity of thinking, but also diversity of skill sets. You might want a designer, you might want a product person, you might want a front-end engineer, a back-end engineer.

Shane Hastie: Those very different perspectives come often with very different assumptions. How do we get the best and avoid the conflict or do we actually want some conflict?

Keeping conflict healthy [19:30]

Natan Žabkar Nordberg: Yes, I think what you’re getting at is that I talked earlier about needing the team to work well together, and it turns out people who are similar find it easier to work together well because they just naturally do similar things. On the flip side, people who are diverse tend to naturally have a few more conflicts, be it good conflicts, the conflicts we want, the conflicts where we say, “Hey, I disagree with you because my experience tells me this is better and we can work together”. It’s not a personal conflict, it’s a product conflict, let’s call it, where we all know we’re on the same side, but we are disagreeing on how to get there. But it also does lead to personal conflicts. Communication patterns are a big one. Somebody might not appreciate somebody being very direct, while somebody else might not appreciate that other person being very indirect.

And they get frustrated because they’re like, just tell me what you want directly. That’s a very common pattern I see as somebody who moved to the UK, where people tend to be less direct, and who has also worked with a lot of people from the eastern side of the world, who tend to be on average more direct, seeing the difference between people and the frustrations that build. Sometimes people thought they were saying the same thing, well, actually they were saying the same thing, but they were saying it in such a different way that they did not realize they were saying the same thing.

So your question was, how do we handle some of those conflicts? I think the main thing is needing to understand each other. And there’s a funny thing about people: if you ask them things, they usually answer. So if you just ask them, what did you mean by that? They tend to be able to answer. Say, “Hey, now there’s just the two of us, instead of 10 other people listening in, can you just tell me directly what you meant? Because I’m afraid I misunderstood it”. There’s actually a concept that I quite like, which, well, at least I came across it in Dungeons and Dragons, which I don’t know if you’ve played or not.

Shane Hastie: Personally I haven’t, but I’ve been around many D&D players.

Ideas from role-playing games – session zero [21:23]

Natan Žabkar Nordberg: For any listeners who have not played it, it’s a role-playing tabletop game that is very different from other games in the sense that there is no real board or fixed world or set of things you do, right? It’s not like playing, I don’t know, Settlers of Catan, where you just do this thing. The whole world is created by one of the players, called the DM, the Dungeon Master; they create everything and then the other players get to play in it, like a sandbox. And the really nice thing about it is that you can create everything, that’s awesome. And the really bad thing about it is that you can create everything, and that’s terrible unless you know what your players want. So in D&D, what people started doing is what they call a session zero. Instead of the first session, you have a session zero where you talk about what we want to get out of this game.

Do we want to be really strictly following the rules, because we really care about the rules? Or do we just want to say the rule of cool: if it’s cool, we get to do it. Hey, we’re just going to go around and fight everybody. Or actually, I’m really interested in political intrigue, so let’s create a world that revolves around that, because it’s your world, you do whatever you want. I think a similar concept, not that it’s your world, do what you want, but the idea of let’s talk about how we do things, can be applied to work. So I just call it a session zero myself, and it’s a session that I tend to have with my direct reports or also with my co-workers, where we talk about how we work together. And there’s lots of things you can discuss there.

There’s communication patterns. I can say things like, “Hey, I tend to be quite direct, especially in one-on-one settings, so if you prefer me not to be, let me know and I’ll try and be less direct. But please keep in mind that if I am direct, I will never, ever try and be rude. It’s just my natural inclination”. I can ask people, “Do you prefer to be praised publicly or privately?” Because I think some people really appreciate that public praise and some people really shy away from it.

I can also tell people again, my own limitations and say, “Look, I am trying to be better at this, but my natural inclination is that if things are going well, I don’t say anything. I don’t know why, it’s just how I am, and I’m trying to be better at that”. But that means that if they know that, they can assume that if I’m not saying anything, they’re doing a good job and I can tell them, “I promise you, if I see something that makes me think you might not be meeting expectations, I will talk to you immediately. I will not wait for next week or next month. I will talk to you now”.

And I think setting this set of patterns, rules, whatever you want to call them, that you can follow in your conversations means that you get to understand each other a lot better. So going back to the team, you might not want to do it to the same level of detail, but there are team norms we can usually discuss. There are team communication patterns we can discuss, and we can figure out what people care about. Oftentimes that happens in ways that are not part of work. I feel you learn more about people by going out and having lunch together or talking about, I don’t know, the shows they enjoy and stuff like this. People will learn about thinking patterns and communication patterns that way, they’ll learn about how people joke, they’ll learn about their humour, and that will make them realize, oh, when they said that thing, they were actually just joking around.

They didn’t mean it seriously, right? Well, actually when they said that thing, they were definitely serious. I should take it seriously too. I should not joke back. So to try and bring it together, if you figure out a way to get people to understand each other, I think you can work around a lot of these conflicts that you don’t want while at the same time encouraging, in quotes, conflicts you do want. These different ideas because they’re not supposed to be conflicts. They’re different ideas of how to solve the problem we have. And if everybody knows we’re on the same side, we’re trying to solve the same problem, we’re working together, and we appreciate each other’s different opinions, then I think you have a very strong team coming together instead of a team that hates each other.

Shane Hastie: I know that you have a background in improv, and you mentioned that earlier as well. What have you brought from the world of improv into leading and working in teams?

Lessons from improv theatre [25:14]

Natan Žabkar Nordberg: Probably two things, maybe three things. One is, as sad as it sounds, fake it till you make it is a thing. And turns out that sometimes people need the illusion of confidence more than actual confidence. So that’s something that I definitely learned in improv. You go on stage, you don’t know what you’re doing, but you need to start doing something, and need to be committed to that. But what you also learn there is that you have a team of people who support you. They’re on your side.

If they see you go on stage and start fixing a car, they’ll make a whole scene around you about how fixing a car is really important right now. And I think the same thing can happen at work with a team of people you trust. You say, “Hey, I’m not quite sure what we need to do, but let’s commit to this specific part. Let’s try our best to do this. Let’s not let people question everything 10 times, but let’s commit to this thing and then let’s pivot if we need to”.

So fake it till you make it, a bit, in leadership positions can be important, because people look to you to understand what to do. Or to put it a little bit more nicely, one of the things I enjoy trying to do is turning chaos into order. What you usually get is the chaos of the outside world coming in, saying, “There’s all these things we could do”. And then you try and create the: here are the three things we’re doing, here are the 100 things we’re not doing. So team, let’s focus on these three things. Even if you’re not actually confident those three are the best choice, they don’t have to be the best choice, they just have to be a good choice, because doing three good things is better than not doing any perfect things.

So I think that’s one thing. The other part is going back to what I mentioned about guided autonomy before. There’s this very interesting thing that happens when you put a person on stage. If you tell them you could do anything you want, I would say like 90% of people freeze and they do nothing. But then when you add a lot of really silly limitations, you tell them you can only speak in rhyme or you have to say exactly seven words per sentence, or you have to hop on one leg while you talk or whatever. You put the silliest limitations on them and suddenly people are so preoccupied thinking about how they can solve the problem within these limitations, they actually start doing something and suddenly this whole scene starts about people hopping on one leg and talking in rhyme and about, I don’t know, they drank a weird potion that made that happen.

Now they need to find a wizard to help them out or whatever, because they have these things added to them. So again, at work there are, hopefully, slightly less silly limitations, but limitations can help us. I think those would be the main things. There’s definitely a benefit in being comfortable, let’s call it, on stage with people’s focus on you. I used to be extremely uncomfortable on stage. I would not go up there, never, ever, ever. And through learning that, yes, a lot of people are uncomfortable with it but they still do it, you learn to just go with it, and that it’s not that bad in the end. And as with any skill, you just learn to deal with it.

Shane Hastie: Natan, we’ve covered a lot of ground, really interesting stuff. If somebody wants to continue the conversation, where would they find you?

Natan Žabkar Nordberg: I am actually not very good at being on any social media. I don’t know if I would say I’m a private person or just, I never put any time and effort into it. So I guess I could say you can find me on LinkedIn and I think my name is unique enough that you will not find anybody else with that name.

Shane Hastie: And again, thank you so much for taking the time to talk to us today.

Natan Žabkar Nordberg: Thank you very much, Shane. It was a pleasure.

Mentioned:

About the Author


Subscribe for MMS Newsletter

By signing up, you will receive updates about our latest information.

  • This field is for validation purposes and should be left unchanged.

Navigating Complexity, from AI Strategy to Resilient Architecture: InfoQ Dev Summit Munich 2025

MMS Founder
MMS Artenisa Chatziou

Here at InfoQ, we spend our days tracking the patterns and practices that senior software professionals are using to solve their toughest challenges. Currently, three key themes dominate our conversations with tech leaders across Europe: the immense pressure to integrate AI responsibly, the critical need to build secure and resilient systems, and the challenge of navigating an increasingly complex regulatory landscape.

At this year’s InfoQ Dev Summit Munich, taking place on 15–16 October, we’ll explore these pressures. We’ve curated a program that moves beyond the hype to focus on the peer-led, actionable insights you need to lead with confidence.

As always, we’ve built this conference on the core InfoQ principle: practical, unbiased content from practitioners in the trenches. No hidden product pitches. No marketing fluff. Just real-world lessons from the front lines of software development.

Here’s a preview of what we’re excited about so far:

Architecting for a Sovereign and Secure Europe

We know that data sovereignty and supply-chain security are top of mind for every architect in the EU. That’s why we’ve invited Markus Ostertag, Chief AWS Technologist at adesso, to give us a technical look at the forthcoming AWS European Sovereign Cloud.

We’ve also tasked Soroosh Khodami, Solution Architect at Code Nomads, with providing a hands-on checklist for hardening your CI/CD pipelines against the next big breach. These sessions are designed to give you the pragmatic strategies you need to build secure-by-design systems.

Moving from AI Hype to Real-World Impact

AI is reshaping our industry, but the real challenge lies in translating theory into tangible value. We’ve brought together speakers who are doing just that.

We’re excited for Patrick Debois to explore how AI is shifting the senior developer’s role from pure implementation to strategic intent. To ground this in reality, Mariia Bulycheva, Senior Applied Scientist at Zalando, will demonstrate how they are utilizing graph neural networks for large-scale personalization.

We’re kicking things off with a keynote from Katharine Jarmul on building privacy-first ML pipelines and closing with Tejas Kumar from DataStax, who will explore what’s next for Generative AI. This is about giving you the tools to lead your team’s AI adoption responsibly and effectively.

Building Resilient Systems Under Pressure

Finally, we aim to demonstrate what it truly takes to build resilient, high-performance systems. We’re particularly looking forward to hearing from Chris Tacey-Green, Head of Engineering at Investec. He won’t just show us the successful event-driven patterns behind their real-time payment system; he’s promised to share the scars and critical trade-offs they navigated on Azure.

Similarly, Daniele Frasca of Seven.One Entertainment Group will walk us through the architectural evolution required to meet the intense demands of a live TV streaming backend.

This is just a small sample of the talks. What excites us most is bringing these innovators together in one place. The summit is your opportunity to connect with peers, validate your own architectural thinking, and walk away with ideas you can implement immediately.

If you’re grappling with these challenges, we hope you’ll join us in Munich.

The InfoQ Dev Summit Munich takes place on 15–16 October 2025. Limited early bird tickets are available now.

About the Author


MongoDB, Inc. (NASDAQ:MDB) Shares Acquired by Cambridge Investment Research Advisors Inc.

MMS Founder
MMS RSS

Cambridge Investment Research Advisors Inc. grew its position in MongoDB, Inc. (NASDAQ:MDB – Free Report) by 4.0% in the first quarter, according to its most recent Form 13F filing with the Securities and Exchange Commission. The institutional investor owned 7,748 shares of the company’s stock after buying an additional 298 shares during the quarter. Cambridge Investment Research Advisors Inc.’s holdings in MongoDB were worth $1,359,000 as of its most recent filing with the Securities and Exchange Commission.

Other institutional investors have also modified their holdings of the company. Strategic Investment Solutions Inc. IL acquired a new stake in shares of MongoDB in the fourth quarter valued at $29,000. Coppell Advisory Solutions LLC boosted its holdings in MongoDB by 364.0% in the fourth quarter. Coppell Advisory Solutions LLC now owns 232 shares of the company’s stock valued at $54,000 after purchasing an additional 182 shares during the period. Smartleaf Asset Management LLC boosted its holdings in MongoDB by 56.8% in the fourth quarter. Smartleaf Asset Management LLC now owns 370 shares of the company’s stock valued at $87,000 after purchasing an additional 134 shares during the period. J.Safra Asset Management Corp boosted its holdings in MongoDB by 72.0% in the fourth quarter. J.Safra Asset Management Corp now owns 387 shares of the company’s stock valued at $91,000 after purchasing an additional 162 shares during the period. Finally, Aster Capital Management DIFC Ltd purchased a new position in MongoDB in the fourth quarter valued at $97,000. 89.29% of the stock is owned by institutional investors and hedge funds.

Analyst Upgrades and Downgrades

A number of research analysts recently issued reports on the company. Daiwa Capital Markets assumed coverage on MongoDB in a report on Tuesday, April 1st. They issued an “outperform” rating and a $202.00 target price for the company. The Goldman Sachs Group decreased their target price on MongoDB from $390.00 to $335.00 and set a “buy” rating for the company in a report on Thursday, March 6th. William Blair reaffirmed an “outperform” rating on shares of MongoDB in a report on Thursday, June 26th. Truist Financial decreased their price target on MongoDB from $300.00 to $275.00 and set a “buy” rating for the company in a report on Monday, March 31st. Finally, Daiwa America raised MongoDB to a “strong-buy” rating in a report on Tuesday, April 1st. Eight equities research analysts have rated the stock with a hold rating, twenty-five have issued a buy rating and one has issued a strong buy rating to the stock. According to MarketBeat.com, the stock currently has an average rating of “Moderate Buy” and a consensus target price of $282.47.


Check Out Our Latest Stock Report on MongoDB

MongoDB Stock Up 3.2%

Shares of MDB stock opened at $211.05 on Friday. The stock has a market capitalization of $17.24 billion, a PE ratio of -185.13 and a beta of 1.41. MongoDB, Inc. has a 52-week low of $140.78 and a 52-week high of $370.00. The stock’s 50-day moving average price is $194.66 and its 200 day moving average price is $215.72.

MongoDB (NASDAQ:MDB – Get Free Report) last issued its quarterly earnings results on Wednesday, June 4th. The company reported $1.00 earnings per share (EPS) for the quarter, beating the consensus estimate of $0.65 by $0.35. MongoDB had a negative return on equity of 3.16% and a negative net margin of 4.09%. The firm had revenue of $549.01 million for the quarter, compared to analysts’ expectations of $527.49 million. During the same quarter in the previous year, the firm posted $0.51 EPS. The firm’s revenue for the quarter was up 21.8% on a year-over-year basis. Analysts expect that MongoDB, Inc. will post -1.78 earnings per share for the current fiscal year.

Insiders Place Their Bets

In other news, Director Dwight A. Merriman sold 2,000 shares of the business’s stock in a transaction that occurred on Thursday, June 5th. The stock was sold at an average price of $234.00, for a total transaction of $468,000.00. Following the sale, the director owned 1,107,006 shares of the company’s stock, valued at $259,039,404. The trade was a 0.18% decrease in their position. The sale was disclosed in a legal filing with the Securities & Exchange Commission, which is available through this hyperlink. Also, Director Hope F. Cochran sold 1,174 shares of the business’s stock in a transaction that occurred on Tuesday, June 17th. The stock was sold at an average price of $201.08, for a total value of $236,067.92. Following the sale, the director directly owned 21,096 shares in the company, valued at $4,241,983.68. This trade represents a 5.27% decrease in their position. The disclosure for this sale can be found here. In the last 90 days, insiders have sold 28,999 shares of company stock worth $6,728,127. Company insiders own 3.10% of the company’s stock.

About MongoDB

(Free Report)

MongoDB, Inc., together with its subsidiaries, provides a general-purpose database platform worldwide. The company provides MongoDB Atlas, a hosted multi-cloud database-as-a-service solution; MongoDB Enterprise Advanced, a commercial database server for enterprise customers to run in the cloud, on-premises, or in a hybrid environment; and Community Server, a free-to-download version of its database, which includes the functionality that developers need to get started with MongoDB.

Featured Stories

Institutional Ownership by Quarter for MongoDB (NASDAQ:MDB)



Receive News & Ratings for MongoDB Daily – Enter your email address below to receive a concise daily summary of the latest news and analysts’ ratings for MongoDB and related companies with MarketBeat.com’s FREE daily email newsletter.


Janney Montgomery Scott LLC Buys Shares of 2,612 MongoDB, Inc. (NASDAQ:MDB)

MMS Founder
MMS RSS

Janney Montgomery Scott LLC bought a new position in MongoDB, Inc. (NASDAQ:MDB – Free Report) during the 1st quarter, according to its most recent disclosure with the Securities and Exchange Commission. The institutional investor bought 2,612 shares of the company’s stock, valued at approximately $458,000.

A number of other hedge funds and other institutional investors also recently made changes to their positions in MDB. Vanguard Group Inc. boosted its stake in shares of MongoDB by 0.3% during the 4th quarter. Vanguard Group Inc. now owns 7,328,745 shares of the company’s stock worth $1,706,205,000 after acquiring an additional 23,942 shares in the last quarter. Franklin Resources Inc. lifted its holdings in MongoDB by 9.7% in the 4th quarter. Franklin Resources Inc. now owns 2,054,888 shares of the company’s stock worth $478,398,000 after purchasing an additional 181,962 shares during the last quarter. Geode Capital Management LLC boosted its position in MongoDB by 1.8% during the fourth quarter. Geode Capital Management LLC now owns 1,252,142 shares of the company’s stock worth $290,987,000 after purchasing an additional 22,106 shares during the period. First Trust Advisors LP grew its holdings in MongoDB by 12.6% during the fourth quarter. First Trust Advisors LP now owns 854,906 shares of the company’s stock valued at $199,031,000 after purchasing an additional 95,893 shares during the last quarter. Finally, Norges Bank bought a new position in shares of MongoDB in the fourth quarter valued at approximately $189,584,000. 89.29% of the stock is owned by hedge funds and other institutional investors.

Wall Street Analysts Forecast Growth

Several research firms recently issued reports on MDB. Macquarie reaffirmed a “neutral” rating and set a $230.00 price objective (up from $215.00) on shares of MongoDB in a report on Friday, June 6th. DA Davidson restated a “buy” rating and set a $275.00 price target on shares of MongoDB in a report on Thursday, June 5th. UBS Group raised their price target on MongoDB from $213.00 to $240.00 and gave the company a “neutral” rating in a research report on Thursday, June 5th. Royal Bank Of Canada reissued an “outperform” rating and set a $320.00 price objective on shares of MongoDB in a research report on Thursday, June 5th. Finally, Oppenheimer decreased their target price on MongoDB from $400.00 to $330.00 and set an “outperform” rating for the company in a research note on Thursday, March 6th. Eight investment analysts have rated the stock with a hold rating, twenty-five have issued a buy rating and one has given a strong buy rating to the stock. According to MarketBeat.com, MongoDB currently has an average rating of “Moderate Buy” and a consensus target price of $282.47.

Get Our Latest Stock Analysis on MongoDB

Insider Activity at MongoDB

In other MongoDB news, Director Hope F. Cochran sold 1,174 shares of MongoDB stock in a transaction on Tuesday, June 17th. The shares were sold at an average price of $201.08, for a total transaction of $236,067.92. Following the completion of the sale, the director owned 21,096 shares of the company’s stock, valued at approximately $4,241,983.68. The trade was a 5.27% decrease in their position. The transaction was disclosed in a legal filing with the Securities & Exchange Commission, which is accessible through the SEC website. Also, CEO Dev Ittycheria sold 25,005 shares of the firm’s stock in a transaction on Thursday, June 5th. The stock was sold at an average price of $234.00, for a total value of $5,851,170.00. Following the completion of the transaction, the chief executive officer owned 256,974 shares in the company, valued at $60,131,916. This represents a 8.87% decrease in their ownership of the stock. The disclosure for this sale can be found here. Insiders have sold 28,999 shares of company stock worth $6,728,127 over the last ninety days. Company insiders own 3.10% of the company’s stock.

MongoDB Stock Performance

NASDAQ MDB opened at $211.05 on Friday. The stock has a market capitalization of $17.24 billion, a P/E ratio of -185.13 and a beta of 1.41. MongoDB, Inc. has a 52 week low of $140.78 and a 52 week high of $370.00. The stock has a fifty day moving average of $194.66 and a 200-day moving average of $215.72.

MongoDB (NASDAQ:MDB – Get Free Report) last posted its quarterly earnings results on Wednesday, June 4th. The company reported $1.00 earnings per share (EPS) for the quarter, beating the consensus estimate of $0.65 by $0.35. MongoDB had a negative return on equity of 3.16% and a negative net margin of 4.09%. The business had revenue of $549.01 million for the quarter, compared to analysts’ expectations of $527.49 million. During the same quarter last year, the business earned $0.51 earnings per share. The firm’s quarterly revenue was up 21.8% compared to the same quarter last year. Equities research analysts forecast that MongoDB, Inc. will post -1.78 earnings per share for the current fiscal year.

MongoDB Profile

(Free Report)

MongoDB, Inc., together with its subsidiaries, provides a general-purpose database platform worldwide. The company provides MongoDB Atlas, a hosted multi-cloud database-as-a-service solution; MongoDB Enterprise Advanced, a commercial database server for enterprise customers to run in the cloud, on-premises, or in a hybrid environment; and Community Server, a free-to-download version of its database, which includes the functionality that developers need to get started with MongoDB.

See Also

Want to see what other hedge funds are holding MDB? Visit HoldingsChannel.com to get the latest 13F filings and insider trades for MongoDB, Inc. (NASDAQ:MDB – Free Report).

Institutional Ownership by Quarter for MongoDB (NASDAQ:MDB)

This instant news alert was generated by narrative science technology and financial data from MarketBeat in order to provide readers with the fastest and most accurate reporting. This story was reviewed by MarketBeat’s editorial team prior to publication. Please send any questions or comments about this story to contact@marketbeat.com.

Before you consider MongoDB, you’ll want to hear this.

MarketBeat keeps track of Wall Street’s top-rated and best performing research analysts and the stocks they recommend to their clients on a daily basis. MarketBeat has identified the five stocks that top analysts are quietly whispering to their clients to buy now before the broader market catches on… and MongoDB wasn’t on the list.

While MongoDB currently has a Moderate Buy rating among analysts, top-rated analysts believe these five stocks are better buys.

View The Five Stocks Here


Discover the next wave of investment opportunities with our report, 7 Stocks That Will Be Magnificent in 2025. Explore companies poised to replicate the growth, innovation, and value creation of the tech giants dominating today’s markets.

Get This Free Report


SQL to NoSQL: Modernizing data access layer with Amazon DynamoDB – AWS

MMS Founder
MMS RSS

In Part 1 of our series, we explored how to effectively migrate from SQL to Amazon DynamoDB. After establishing data modeling strategies in Part 2, we now explore key considerations for analyzing and designing filters, pagination, edge cases, and aggregations, building upon those data models to create an efficient data access layer. This component bridges your application with DynamoDB features and capabilities.

The transition from SQL-based access patterns to a DynamoDB API-driven approach presents opportunities to optimize how your application interacts with its data layer. This final part of our series focuses on implementing an effective abstraction layer and handling various data access patterns in DynamoDB.

Redesign the entity model

The entity model, which represents the data structures in your application, will need to be redesigned to match the DynamoDB data model. This might involve de-normalizing the models and restructuring relationships between entities. In addition, consider the effort involved in the following configurations:

  • DynamoDB attribute annotation – Annotate entity properties with DynamoDB-specific attributes, including partition key, sort key, local secondary index (LSI) information, and global secondary index (GSI) information. For example, using the .NET object persistence model requires mapping your classes and properties to DynamoDB tables and attributes.
  • Key prefix configuration – In a single table design, you might have to configure partition and sort key prefixes for your entity models. Analyze how these prefixes will be used for querying within your data access layer. The following code is a sample implementation of key prefix configuration in entity models:
public class Post
{
    private const string PREFIX = "POST#";
    
    public string Id { get; private set; }
    public string Content { get; private set; }
    public string AuthorId { get; private set; }

    public Post(string id, string content, string authorId)
    {
        Id = id;
        Content = content;
        AuthorId = authorId;
    }

    // Property that automatically adds prefix
    public string PartitionKey => $"{PREFIX}{Id}";
}

// Usage example
var post = new Post("123", "Hello World", "USER#456");
var queryKey = post.PartitionKey; // Gets "POST#123"

  • Mapping rule redesign – Due to changes in your entity models, existing mapping rules between your application’s view models and the entity models might need to be redesigned.

Designing the DynamoDB API abstraction layer

The DynamoDB API abstraction layer encapsulates the underlying DynamoDB operations while providing your application with a clean interface. Let’s explore what you might need to implement in this layer.

Error handling and retries

High-traffic scenarios often lead to transient failures that need handling. For instance, during viral content surges or when a celebrity post gains sudden attention, you might encounter throughput exceeded exceptions. You might need to implement retry strategies with exponential backoff and jitter to handle these transient failures.
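As an illustration, throttled calls can be retried with capped exponential backoff plus jitter. This is a language-agnostic sketch in Python (the .NET SDK has its own configurable retry policy you may prefer); the exception type and the retry limits are assumptions:

```python
import random
import time

MAX_RETRIES = 5
BASE_DELAY_S = 0.05   # initial backoff
MAX_DELAY_S = 2.0     # cap on any single delay


class ThroughputExceeded(Exception):
    """Stand-in for a provisioned-throughput-exceeded error."""


def with_retries(operation):
    """Run `operation`, retrying throttled calls with exponential backoff plus jitter."""
    for attempt in range(MAX_RETRIES + 1):
        try:
            return operation()
        except ThroughputExceeded:
            if attempt == MAX_RETRIES:
                raise
            # Full jitter: sleep a random fraction of the capped exponential delay.
            delay = min(MAX_DELAY_S, BASE_DELAY_S * (2 ** attempt))
            time.sleep(random.uniform(0, delay))
```

The AWS SDKs already implement a similar policy out of the box; a custom layer like this is mainly useful when you wrap retries with application-level concerns such as metrics or fallbacks.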

Batch operation management

Applications often need to process multiple items efficiently to provide a good user experience. Consider scenarios like loading a personalized news feed that combines posts from multiple followed users. You might need to implement the following:

  • Automatic chunking of requests within DynamoDB limits
  • Parallel processing for performance optimization
  • Recovery mechanisms for partial batch failures
  • Progress tracking for long-running operations
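For example, BatchGetItem accepts at most 100 keys per request, so key lists must be chunked before dispatch. A minimal Python sketch of the chunking step (the 100-key limit comes from the DynamoDB service quotas; how chunks are then fetched in parallel is left out):

```python
def chunk_keys(keys, chunk_size=100):
    """Split a key list into BatchGetItem-sized chunks (max 100 keys per request)."""
    return [keys[i:i + chunk_size] for i in range(0, len(keys), chunk_size)]
```

Each chunk can then be dispatched concurrently, with any UnprocessedKeys from a response fed back into the retry path described earlier.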

Loading related entity data

When migrating from a relational database to DynamoDB, a common perception is that, because data is often denormalized, related data access becomes straightforward. However, this isn’t always true. Although in some cases relationships might be modeled using a single-item modeling strategy, cost and performance considerations might instead lead to strategies like vertical partitioning or composite sort keys.

When adapting to DynamoDB, you might have to develop helper methods in your abstraction layer to load the relational data of an entity (navigation properties) efficiently. These methods need to consider your application architecture, access patterns, and data modeling strategies. For example, in our social media application, loading comments for a post might require different approaches based on the chosen modeling strategy—from simple attribute retrieval in single-item models to query operations in vertical partitioning.

For entities related using a single-item strategy, specific loading logic might not be necessary because all data is retrieved in a single API operation. However, for other modeling strategies like vertical partitioning, your abstraction layer methods need to handle efficient querying based on filter conditions and pagination. For instance, when comments are stored as separate items sharing the post’s partition key, the method must efficiently query and paginate through the related items.

Building upon the batch operation capabilities, you can extend these methods to handle loading related data for multiple items. For example, when loading comments for multiple posts, use BatchGetItem to do the following:

  • Use established batching mechanisms to group requests
  • Apply retries and error handling strategies
  • Provide consistent interfaces for both single and bulk operations

When using GSIs, you might need to retrieve additional attribute data not included in the GSI projection. Design strategies to efficiently load the required data while minimizing API calls and optimizing performance and cost. Your abstraction layer methods might have to provide the following:

  • Consistent interfaces for loading related data
  • Optimization of API calls and cost
  • Simplified maintenance through centralized implementation

The following code is a sample implementation of loading navigation properties:

// Entity with navigation property
public class Post
{
    public string Id { get; set; }
    public string Content { get; set; }
    public IEnumerable<Comment> Comments { get; set; }
}

// Interface for loading related data
public interface INavigationPropertyManager 
{
    Task<IEnumerable<Comment>> LoadRelatedItemsAsync(string parentId);
    Task<IDictionary<string, IEnumerable<Comment>>> LoadRelatedItemsInBatchAsync(IEnumerable<string> parentIds);
}

// Service using the loader
public class PostService
{
    private readonly INavigationPropertyManager _navigationPropertyManager;

    public PostService(INavigationPropertyManager navigationPropertyManager)
    {
        _navigationPropertyManager = navigationPropertyManager;
    }
    
    public async Task<IEnumerable<Comment>> GetPostCommentsAsync(string postId)
    {
        return await _navigationPropertyManager.LoadRelatedItemsAsync(postId);
    }
}

When designing these methods, analyze your current application’s loading patterns and evaluate whether maintaining similar patterns in DynamoDB can benefit your application’s performance and user experience.

Response mapping

As applications evolve, their data structures and requirements change over time. For instance, when adding new features like post reactions beyond simple likes, or introducing rich media content in user profiles, backward compatibility becomes crucial. You might need to implement mapping logic to perform the following functions:

  • Convert DynamoDB items to domain objects
  • Handle backward compatibility as data models evolve
  • Manage default values for missing attributes
  • Support different versions of the same entity
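The mapping step can be sketched as a single function that tolerates older item versions. This Python illustration uses a plain dictionary standing in for a deserialized DynamoDB item; the attribute names and the SchemaVersion convention are assumptions:

```python
def map_post(item):
    """Convert a raw DynamoDB item into a domain dict, tolerating older item versions."""
    version = item.get("SchemaVersion", 1)
    return {
        "id": item["Id"],
        # Default values for attributes that may be missing on old items.
        "content": item.get("Content", ""),
        # "Reactions" was added in schema v2; items written earlier get an empty default.
        "reactions": item.get("Reactions", {}) if version >= 2 else {},
    }
```

Centralizing this logic in one mapper keeps version handling out of your business code as the entity evolves.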

Filter expression building

Complex data retrieval needs often arise in modern applications. For instance, when users want to find posts from a specific time frame that have gained significant engagement, or when filtering comments based on user interaction patterns. Your abstraction layer might need to do the following:

  • Convert complex search criteria into DynamoDB filter expressions
  • Handle multiple filter conditions dynamically
  • Manage expression attribute names and values
  • Support nested attribute filtering
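The expression-building step can be centralized in one helper. A Python sketch (the criteria shape and placeholder naming are assumptions, but the output keys match the FilterExpression, ExpressionAttributeNames, and ExpressionAttributeValues parameters that DynamoDB Query and Scan accept):

```python
def build_filter(criteria):
    """Turn {"attribute": value} equality criteria into DynamoDB filter parameters.

    Placeholders (#n0, :v0, ...) avoid clashes with DynamoDB reserved words.
    """
    clauses, names, values = [], {}, {}
    for i, (attr, value) in enumerate(sorted(criteria.items())):
        name_ph, value_ph = f"#n{i}", f":v{i}"
        names[name_ph] = attr
        values[value_ph] = value
        clauses.append(f"{name_ph} = {value_ph}")
    return {
        "FilterExpression": " AND ".join(clauses),
        "ExpressionAttributeNames": names,
        "ExpressionAttributeValues": values,
    }
```

A production version would also support comparison operators, begins_with, and nested attribute paths; the placeholder bookkeeping stays the same.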

Pagination implementation

Efficient data navigation is important for user experience. Consider scenarios like users scrolling through their infinite news feed, or moderators reviewing comments on viral posts. You might need to implement the following:

  • Token-based pagination using LastEvaluatedKey
  • Configurable page size handling
  • Efficient large result set processing
  • Consistent pagination behavior across different queries

The following code is a sample implementation of pagination:

// Enhanced interface adding pagination support
public interface INavigationPropertyManager 
{
    Task<IEnumerable<Comment>> LoadRelatedItemsAsync(string parentId);
    Task<IDictionary<string, IEnumerable<Comment>>> LoadRelatedItemsInBatchAsync(IEnumerable<string> parentIds);
    // Method for paginated loading
    Task<PagedResult<Comment>> LoadRelatedItemsPagedAsync(string parentId, PaginationOptions options);
}

public class PaginationOptions
{
    public int PageSize { get; set; } = 20;
    public string ExclusiveStartKey { get; set; }
}

public class PagedResult<T>
{
    public IEnumerable<T> Items { get; set; }
    public string LastEvaluatedKey { get; set; }
}

// With pagination support
public class PostService
{
    private readonly INavigationPropertyManager _navigationPropertyManager;

    public PostService(INavigationPropertyManager navigationPropertyManager)
    {
        _navigationPropertyManager = navigationPropertyManager;
    }
    
    public async Task<PagedResult<Comment>> GetPostCommentsPagedAsync(
        string postId, 
        int pageSize = 20, 
        string nextToken = null)
    {
        var options = new PaginationOptions 
        { 
            PageSize = pageSize,
            ExclusiveStartKey = nextToken
        };
        
        return await _navigationPropertyManager.LoadRelatedItemsPagedAsync(postId, options);
    }
}

Data encryption

Protecting sensitive user data is paramount in modern applications. In addition to the encryption at rest that DynamoDB provides by default, you might need to implement client-side, attribute-level encryption for sensitive fields such as personal identifiers.

Observability

Monitoring application health and performance is essential. When tracking viral post performance or user engagement patterns during peak usage times, detailed insights become important. Consider monitoring the following Amazon CloudWatch metrics:

  • Request latency tracking – Monitor DynamoDB metrics like SuccessfulRequestLatency, and create custom metrics to track latency caused by exceptions such as TransactionConflict and ConditionalCheckFailedRequests
  • Capacity consumption – Track ConsumedReadCapacityUnits and ConsumedWriteCapacityUnits
  • Error rates and patterns – Monitor ConditionalCheckFailedRequests, SystemErrors, UserErrors, and related metrics
  • Query performance – Track ThrottledRequests, ReadThrottleEvents, WriteThrottleEvents, and custom metrics to monitor query or scan efficiency (ScannedCount or Count), client-side filtering duration, and external service call latencies

Transaction management

Maintaining data consistency is critical in many scenarios. When updating user profiles along with their post metadata, or managing comment threads with their associated counters, transactional consistency becomes important. You might need to implement the following:

  • Transactional operation handling
  • Timeout and conflict management
  • Compensation logic for failed transactions
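For instance, inserting a comment and bumping its post's counter atomically maps to a single TransactWriteItems request. A Python sketch of constructing that request body (the table name, key schema, and attribute names are hypothetical; with boto3 this dict would be passed to client.transact_write_items):

```python
def build_comment_transaction(post_id, comment_id, content):
    """Build a TransactWriteItems request: insert a comment and bump the post's counter."""
    return {
        "TransactItems": [
            {
                "Put": {
                    "TableName": "SocialApp",
                    "Item": {
                        "PK": {"S": f"POST#{post_id}"},
                        "SK": {"S": f"COMMENT#{comment_id}"},
                        "Content": {"S": content},
                    },
                    # Fail the whole transaction if the comment already exists.
                    "ConditionExpression": "attribute_not_exists(SK)",
                }
            },
            {
                "Update": {
                    "TableName": "SocialApp",
                    "Key": {"PK": {"S": f"POST#{post_id}"}, "SK": {"S": "METADATA"}},
                    "UpdateExpression": "ADD CommentCount :one",
                    "ExpressionAttributeValues": {":one": {"N": "1"}},
                }
            },
        ]
    }
```

If either action fails (for example, the condition check), DynamoDB cancels the whole transaction, which is where your compensation and conflict-handling logic hooks in.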

This abstraction layer helps your application interact with DynamoDB efficiently while maintaining clean separation of concerns and consistent behavior across all data access operations. When implementing these features in your abstraction layer, consider approaches to monitor and optimize their effectiveness. For instance, you can implement a centralized error tracking mechanism using custom CloudWatch metrics for different DynamoDB operations. These insights can help continuously improve your abstraction layer’s reliability and performance.

Handling filters

After you design your DynamoDB API abstraction layer with core operations and data loading capabilities, analyze how to adapt existing query patterns to align with the DynamoDB querying approach. As a first step, examine how query filter conditions transition from relational SQL querying to DynamoDB patterns.

Whereas relational databases use query optimizers for WHERE clause filters, DynamoDB empowers developers with precise control over query execution through its purposeful design of base tables and indexes. This design enables predictable and consistent performance at scale.

DynamoDB processes queries in a two-step manner. First, it retrieves items that match the key condition expression against partition and sort keys. Then, before returning the results, it applies filter expressions on non-key attributes. Although filter expressions don’t reduce RCU consumption as the entire result set is read before filtering, they reduce data transfer costs and improve application performance by filtering data at the DynamoDB service level.

Analyze your application’s data access patterns to optimize your queries for this two-step process. Consider developing a design approach that facilitates seamless translation to DynamoDB expression statements, which improves productivity when rewriting a large set of queries. Build upon your DynamoDB API abstraction layer’s helper methods for constructing key conditions and filter expressions. For example, in our social media application, we developed methods that handle common filtering scenarios like date range filters or engagement metric thresholds. These methods can be combined and reused across different query requirements, reducing development effort and maintaining consistency in how filters are applied.

Handling complex filter requirements

DynamoDB’s flexible expression capabilities handle many filtering scenarios directly, and you can implement client-side filtering for any additional requirements. Some examples include:

  • Unsupported functions or methods – When working with filters that reference system or user-defined functions, retrieve the data from DynamoDB and apply these specialized filters at the application layer. For SQL queries that use functions like string operations (SUBSTRING, CONCAT), date/time calculations (DATEADD, DATEDIFF), or mathematical functions (ROUND, CEILING), retrieve the base data and apply these operations in your application layer. Consider designing pre-calculated attributes during data model design to avoid client-side filtering that can impact performance.
  • Loading related entity data – For queries that filter based on attributes from related entities, your application might need to load data from multiple DynamoDB tables or item collections and apply filters at the application layer. For example, when finding posts based on author characteristics or comment patterns, design efficient data retrieval strategies and consider whether denormalization might be appropriate for frequently accessed patterns.
  • Integrating with external data sources – In microservice architectures, filtering might require data from other services or databases. Design efficient data retrieval strategies and consider implementing appropriate caching mechanisms to minimize the performance impact of cross-service filtering. Analyze these scenarios to determine the best approach for your specific use case.

Let’s examine the use case of retrieving post comments by active authors and sentiment score, requiring data from an external user service and analytics database:

/*
Original SQL Query demonstrating filters across different data sources:
SELECT c.*, u.name, u.profile_pic, u.status, m.sentiment_score
FROM comments c
JOIN users u ON c.user_id = u.id 
JOIN comment_analytics m ON c.id = m.comment_id
WHERE c.post_id = '123'
  AND c.created_at > DATEADD(year, -1, GETUTCDATE())
  AND u.status = 'ACTIVE'
  AND m.sentiment_score > 0.8
*/

public class Comment
{   
    [DynamoDBHashKey]
    public string PostId { get; set; }
    [DynamoDBRangeKey]
    public string CreatedAt { get; set; }
    [DynamoDBProperty]
    public string CommentId { get; set; }
    [DynamoDBProperty]
    public string UserId { get; set; }
    [DynamoDBProperty]
    public string Content { get; set; }
}

public class PostCommentService
{
    private readonly IDynamoDBContext _dynamoDbContext;
    private readonly IUserService _userService;
    private readonly ICommentAnalytics _analyticsDb;

   //Initialize readonly fields in constructor
   
    public async Task<IEnumerable<Comment>> GetPostCommentsAsync(
        string postId, 
        DateTime startDate,
        double minSentimentScore)
    {
        // Step 1: Query DynamoDB for the post's comments created after startDate
        var comments = await _dynamoDbContext.QueryAsync<Comment>(postId,
                    QueryOperator.GreaterThanOrEqual,
                    new object[] { startDate.ToString("yyyy-MM-dd") })
                .GetRemainingAsync();
                                
        // Step 2: Get user details and filter by active status
        var userIds = comments.Select(c => c.UserId).Distinct();
        var userDetails = await _userService.GetUserDetailsAsync(userIds);
        var activeComments = comments.Where(c => userDetails[c.UserId].Status == "ACTIVE");

        // Step 3: Apply sentiment score filter from analytics
        var commentIds = activeComments.Select(c => c.CommentId);
        var sentimentScores = await _analyticsDb.GetSentimentScoresAsync(commentIds);
        
        return activeComments.Where(c => sentimentScores[c.CommentId] > minSentimentScore);
    }
    }
}

When analyzing your existing queries, identify scenarios requiring client-side filtering and evaluate their performance implications. This analysis helps you do the following:

  • Estimate development effort
  • Plan optimization strategies
  • Determine caching needs
  • Assess impact on response times

Consider these factors while designing your data access layer to achieve efficient query handling in your DynamoDB implementation. As you implement your design, consider approaches to monitor and optimize filter operations. For instance, you can track metrics about filter usage patterns and their performance impact, helping you validate your implementation decisions and identify optimization opportunities as your application evolves.

Handling pagination

Evaluate your application’s current pagination strategy and align it with DynamoDB capabilities. Whereas relational database applications often display total page numbers to users, DynamoDB is optimized for forward-only, key-based pagination using LastEvaluatedKey. Because implementing features like total record counts requires full table scans, consider efficient alternatives that take advantage of DynamoDB strengths. Discuss with stakeholders how pagination approaches like cursor-based navigation or “load more” patterns can provide excellent user experience while maintaining optimal performance.

For applications requiring result set size context, to obtain item counts in DynamoDB, consider implementing counters instead of calculating real-time totals. In our social media application, we store and update post counts per user during write operations, allowing us to show information like “Viewing 50 of approximately 1,000 posts” without requiring full table scans. However, these counters become less accurate when queries include filters. For common, predefined filters, separate counters can be maintained (e.g., posts_count_last_30_days). For dynamic filter combinations, consider alternative patterns such as infinite scroll that align better with DynamoDB’s pagination model while providing good user experience.
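Maintaining such a counter is a single atomic UpdateItem with an ADD action on the write path. A Python sketch of constructing the request (the table name and key schema are hypothetical; with boto3 the dict would be passed to client.update_item):

```python
def build_counter_update(user_id, delta=1):
    """Build an UpdateItem request that atomically adjusts a user's post counter."""
    return {
        "TableName": "SocialApp",
        "Key": {"PK": {"S": f"USER#{user_id}"}, "SK": {"S": "COUNTERS"}},
        # ADD creates the attribute at `delta` if absent, otherwise increments it.
        "UpdateExpression": "ADD PostCount :delta",
        "ExpressionAttributeValues": {":delta": {"N": str(delta)}},
        "ReturnValues": "UPDATED_NEW",
    }
```

Calling this with delta=-1 on deletes keeps the counter symmetric with the write path.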

When designing pagination in your data access layer for DynamoDB, understand its core pagination behavior. DynamoDB might not return all matching items in a single API call due to two key constraints: the "Limit" parameter and the 1 MB maximum read size. Consequently, your implementation needs to handle multiple API calls using LastEvaluatedKey to fulfill pagination requirements. Design your data access layer to manage this process transparently, maintaining a clean separation between pagination mechanics and business logic.

Consider the following factors when implementing DynamoDB pagination:

  • Filtering impact analysis – Evaluate your query filters, including those applied through filter expressions or client-side filtering. Assess the cardinality of your data to understand what percentage of query results are filtered out. This analysis helps determine an appropriate "Limit" parameter that aligns with your application’s page size needs while accounting for filtered results.
  • Limit parameter optimization – Setting the limit parameter requires careful consideration of tradeoffs. Setting it too low might lead to unnecessary API calls, impacting performance. Conversely, setting it too high might retrieve excess data, also affecting performance and cost. Aim for a limit that closely matches your desired page size while accounting for filtering effects.
  • Performance monitoring – Implement proper monitoring for your pagination implementation to track efficiency metrics like the number of API calls per page request and average response times. Use this data to fine-tune your pagination parameters and identify opportunities for optimization. Consider implementing appropriate caching strategies for frequently accessed pages to improve performance further.
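Because a filtered Query can return fewer than a page's worth of items per call, the data access layer typically loops on LastEvaluatedKey until the page is full or the results are exhausted. A Python sketch with a stubbed query function (the fetch signature is an assumption; note that when the page boundary falls mid-response, a production version derives the resume token from the last item actually returned rather than reusing LastEvaluatedKey directly):

```python
def collect_page(query_fn, page_size, start_key=None):
    """Accumulate up to page_size items, following LastEvaluatedKey across calls.

    query_fn(exclusive_start_key) must return a dict shaped like a DynamoDB
    Query response: {"Items": [...], "LastEvaluatedKey": <key or absent>}.
    """
    items, key = [], start_key
    while len(items) < page_size:
        resp = query_fn(key)
        items.extend(resp["Items"])
        key = resp.get("LastEvaluatedKey")
        if key is None:
            break  # no more data to fetch
    return items[:page_size], key
```

Instrumenting this loop (calls per page, items discarded by trimming) gives you exactly the efficiency metrics the monitoring bullet above calls for.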

By considering these aspects and maintaining proper monitoring, you can implement an efficient pagination process that optimizes data retrieval while effectively managing performance and costs. For instance, you can track metrics like the average number of DynamoDB calls per page request and result set distributions. These insights can help fine-tune your implementation parameters and identify opportunities for optimization as your application grows.

Handling edge cases

When migrating your data access layer to DynamoDB, identify and address edge cases that involve large-scale data operations. Understanding and planning for these edge cases helps make sure your DynamoDB implementation remains performant and cost-effective under extreme conditions:

  • Predictable high-volume operations – Consider a scenario where a user with millions of followers posts content, requiring updates to news feeds or notification tables for all followers. These are operations where we can determine the scale in advance based on known factors like follower count. Design patterns like write sharding or batch processing can help manage these scenarios effectively. For instance, you might implement a fan-out-on-read approach for high-follower accounts instead of updating all follower feeds immediately.
  • Unexpected scale events – Some operations can experience sudden, unpredictable spikes in activity. For example, when a post unexpectedly receives massive engagement, generating thousands of reads and writes per second. Unlike predictable high-volume operations where we can plan our data model and access patterns in advance, these scenarios require strategies like dynamic scaling, caching, and asynchronous patterns to handle sudden load spikes while maintaining application performance.
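The write-sharding pattern mentioned above spreads a hot item's writes across several partition keys by appending a shard suffix, with reads fanning in across all shards. A Python sketch (the shard count and key format are illustrative assumptions):

```python
import random

SHARD_COUNT = 10  # tune to the expected write rate for a hot key

def sharded_write_key(post_id):
    """Pick a random shard suffix so concurrent writes spread across partitions."""
    return f"POST#{post_id}#SHARD#{random.randrange(SHARD_COUNT)}"

def all_shard_keys(post_id):
    """Enumerate every shard key; reads must query each one and merge the results."""
    return [f"POST#{post_id}#SHARD#{i}" for i in range(SHARD_COUNT)]
```

The trade-off is explicit: writes scale linearly with shard count, while reads pay a fan-in cost of one query per shard.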

When analyzing your application for edge cases, consider these factors:

  • Scale implications of high-volume operations
  • Burst capacity requirements for sudden traffic spikes
  • Cost implications of different implementation approaches
  • Performance impact on other application functions

Regular load testing and monitoring of these edge case scenarios helps validate your implementation approaches and identify potential optimizations. When implementing your edge case handling strategy, consider approaches to detect and respond to these scenarios in production. For instance, you can set up monitoring mechanisms to track partition key usage patterns and identify potential hot partition situations before they impact performance. This proactive approach makes sure your application can handle extreme conditions while maintaining performance and managing costs effectively.

Handling aggregations and de-normalized data

When migrating from relational databases to Amazon DynamoDB, aggregations and de-normalized data can affect your existing commands and queries, which you might have to account for when redesigning your data access layer.

Managing aggregations

Relational databases typically use JOINs and GROUP BY clauses for real-time aggregations, such as calculating total posts per user or comments per post. DynamoDB partition and sort key-based access patterns support different approaches for handling aggregations. In our social media application, we maintain aggregation entities to store pre-calculated values. For example, we store a user’s total posts, total followers, and engagement metrics as separate items that update when corresponding actions occur. This pattern can be applied to any application where real-time aggregations are frequently accessed.

When implementing aggregation strategies, analyze the following:

  • Which aggregations are frequently accessed
  • Frequency of updates to aggregated values
  • Performance requirements for aggregation queries
  • Consistency requirements for aggregated data

Handling de-normalized data

DynamoDB often requires data de-normalization based on access pattern requirements. For instance, in our application, we store user status directly on post entities to enable efficient filtering. This approach trades off increased write operations for improved read efficiency.

When analyzing de-normalization needs, consider the following:

  • Frequency of attribute access
  • Update patterns of source data
  • Impact on write operations
  • Required consistency level

Managing updates

To manage updates to aggregated entities or de-normalized attributes, you can choose between the following methods:

  • Synchronous updates – Our application uses this approach for critical user-facing features where immediate consistency is required. For example, updating like counts on popular posts uses transactions to maintain consistency, though this might impact write performance during high-traffic periods.
  • Asynchronous updates – We implement this pattern using Amazon DynamoDB Streams and AWS Lambda, which is a loosely coupled architecture with less performance impact for less time-critical updates. For instance, updating trending post rankings or user activity summaries can tolerate eventual consistency in favor of better performance.
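The asynchronous path can be sketched as a Lambda handler that folds a DynamoDB Streams batch into per-post counter deltas. The record shape below follows the Streams event format; the entity key conventions are hypothetical, and a real handler would then apply each delta with an UpdateItem ADD:

```python
def count_new_comments(event):
    """Fold a DynamoDB Streams batch into per-post comment-count deltas."""
    deltas = {}
    for record in event.get("Records", []):
        # Only newly inserted items affect the counters.
        if record.get("eventName") != "INSERT":
            continue
        new_image = record["dynamodb"]["NewImage"]
        if not new_image["SK"]["S"].startswith("COMMENT#"):
            continue
        post_key = new_image["PK"]["S"]
        deltas[post_key] = deltas.get(post_key, 0) + 1
    return deltas
```

Aggregating the batch before writing keeps the update load on the counter items proportional to the number of posts touched, not the number of stream records.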

Analytical processing

For complex analytical queries or large-scale reporting needs, consider complementary services, such as exporting table data to Amazon S3 and querying it with Amazon Athena, or loading it into Amazon Redshift for warehouse-style analysis.

By analyzing your aggregation and analytical requirements and selecting appropriate tools and approaches, you can make sure your modernized data access layer effectively handles these data processing needs while taking advantage of the strengths of DynamoDB. When implementing your aggregation strategy, consider approaches to monitor the health of your solution. For instance, you can track metrics about aggregation update latency and consistency patterns. These insights can help validate your implementation choices and make sure your aggregation strategy maintains optimal performance as your application scales.

Conclusion

In this post, we explored strategies for modernizing your application’s data access layer for DynamoDB. The transition from SQL-based patterns to a DynamoDB API-driven approach offers opportunities to optimize how your application interacts with its data.

Building on the data models designed in Part 2, we examined how to implement efficient query patterns through DynamoDB features for filtering, pagination, and aggregation. The abstraction layer patterns we discussed can help create a clean separation between your application logic and DynamoDB operations while maintaining consistent performance.

The DynamoDB approach to data access differs from traditional SQL patterns, but with proper implementation of the strategies we’ve covered—from error handling to edge cases—you can build a robust data access layer that takes advantage of DynamoDB capabilities effectively. Close collaboration between database and application teams helps create solutions that balance performance, cost optimization, and scalability. Begin implementing these patterns by creating focused proof-of-concept implementations. Test your abstraction layer design with representative workloads to validate your approach before expanding to your full application scope.



Google Launches Gemini CLI: Open-Source Terminal AI Agent for Developers

MMS Founder
MMS Robert Krzaczynski

Google has released Gemini CLI, a new open-source AI command-line interface that brings the full capabilities of its Gemini 2.5 Pro model directly into developers’ terminals. Designed for flexibility, transparency, and developer-first workflows, Gemini CLI provides high-performance, natural language AI assistance through a lightweight, locally accessible interface.

Gemini CLI is available today under the Apache 2.0 license, enabling developers to inspect, modify, and extend the source code. It features deep integration with Gemini Code Assist, allowing developers to seamlessly shift between IDE-based and terminal-based AI assistance using the same model backbone.

Key capabilities of Gemini CLI include:

  • Support for Gemini 2.5 Pro with a 1 million token context window
  • Prompt grounding with Google Search, enabling real-time web context integration
  • Built-in support for the Model Context Protocol (MCP) and custom system prompts (via GEMINI.md)
  • Non-interactive scripting mode, allowing terminal automation with AI as part of CI/CD workflows

Once authenticated with a personal Google account, developers can access Gemini CLI for free under a Gemini Code Assist license. Advanced users can alternatively configure Gemini CLI with API keys from Google AI Studio or Vertex AI for more control or higher-volume use cases.

Gemini CLI supports a range of developer workflows, including:

  • Writing, refactoring, and debugging code
  • Automating terminal tasks and shell scripting
  • Researching technical topics or documentation
  • Generating structured content or markdown
  • Performing local file and system-level operations

The project is intended to evolve with community input, and contributions are encouraged via the Gemini CLI GitHub repository. Google highlights that this release continues the company’s shift toward open, extensible AI tooling aimed at democratizing access to powerful models across platforms.

However, initial user feedback points to areas that still need refinement. One developer commented:

Tried a bit just now; for my not-too-difficult task, it firstly searched a codebase for 4 minutes, then ended up asking to explore the code in another codebase, to which all calls were commented out. Doesn’t feel close to Claude Code yet.

Another Reddit user added:

Well, it is fine until 5 minutes into the session, when it switches the model to flash, which is entirely awful at coding.

For developers who prefer working in an IDE, Gemini Code Assist now shares agent technology with Gemini CLI. This includes multi-step planning, auto-recovery, and reasoning-based code generation in VS Code, offered free across all tiers.

Gemini CLI is available today at cli.gemini.dev and requires only a Google login to get started.
