Month: June 2025

MMS • Ben Linders

Becoming a principal engineer requires more than technical skill; it’s about influence, communication, and strategy. Success means enabling teams by shaping culture, Sophie Weston said. In her talk at QCon London, she suggested developing deep skills in multiple domains alongside general, collaborative skills. Skills from life outside work, like sports, volunteering, or gaming, can add valuable perspective and build leadership potential.
Principal engineer is often the highest level of the individual contributor path. Getting there needs more than just having deep technical expertise, as Weston explained:
It’s about influence, communication, and strategy. It’s about understanding that your career isn’t just about climbing; it’s about navigating, adapting, and growing in ways that matter to you.
Principal engineers focus less on what teams build and more on how they build, Weston said. Their role is to help teams do great work by creating an environment where people thrive. By reducing friction, they help teams and the wider organisation move faster, avoid unnecessary obstacles, and stay on track.
According to Weston, engineering leadership consists of:
- Setting technical direction. This is about guiding teams and helping them to make smart architectural and technical choices.
- Driving good engineering practices. Helping teams build software efficiently and deliver real value – both to the business and its customers.
- Shaping culture. Mentoring, coaching, and making the organisation a place where engineers thrive.
Weston mentioned that, as a principal engineer, she’s passionate about creating an engineering culture that works both for the organisation, helping it deliver value effectively and efficiently, and for the people working in it:
We want to have impressive DORA metrics, but not at the expense of burning people out.
Engineers need to broaden their skill set if they want to advance towards engineering leadership. Technical knowledge, no matter how deep, no matter how impressive, is not going to be enough, Weston said. She suggested developing deep skills in multiple domains alongside general, collaborative skills:
Being Pi-shaped is valuable, but leadership demands more breadth. You need to become a “broken comb”—someone with expertise in multiple areas and the ability to connect insights across domains.
As a “broken comb”, you don’t just have expertise in multiple areas; you have varying depths of knowledge to bridge gaps, connect ideas, and solve problems creatively, Weston said.
Weston argued that people should bring their whole selves to work. She mentioned things that people do outside of their job, where they can learn and practice useful leadership skills:
If you’re part of a sports team, you will be building skills in team working and resilience. If you do volunteering, say in a youth group for example, you’ll develop skills in coaching and problem-solving. Maybe gaming is your thing, in which case you are likely to have strong skills in strategic thinking and adaptability.
Getting involved in the tech community and helping to organise events is a fantastic way to learn and practice new skills, Weston mentioned.
Don’t undervalue skills that you learn in other parts of your life and how they can help you in your career journey. We often talk about the importance of psychological safety and the need for people to be able to bring their whole selves to work and be themselves at work; this applies in a wider career context too, Weston said.
The skills you learn outside of work are just as valuable as the ones you learn through actually doing your job, Weston said. Sometimes they are more valuable because they represent more teeth on your “broken comb”. It’s not just the skills themselves that are valuable, but the additional perspective you gain from having acquired them in a different setting, Weston concluded.

MMS • Daniel Curtis

Vitest, the modern Vite-native test runner, has introduced Vitest Browser Mode, offering developers an alternative to traditional DOM simulation libraries like JSDOM. The addition of browser mode to Vitest allows tests to run in an actual browser context, offering more realistic and reliable testing behavior for UI applications built with React, Vue, or Svelte.
Vitest Browser Mode is currently experimental.
Vitest Browser Mode was introduced to improve testing with more accurate and reliable results; it does this by running tests in a real browser context using Playwright or WebDriverIO. This mode allows for realistic browser rendering and interaction.
Historically, JSDOM has been the default simulated environment for running front-end tests in Node.js. It simulates a browser DOM inside Node, making it a convenient and fast option for unit testing. However, because JSDOM isn’t a real browser, it can fall short for advanced use cases such as layout calculations, CSS behavior, or APIs it does not yet support. Vitest aims to replace JSDOM environments and offers an easy migration path.
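As a rough sketch of what that migration looks like in practice, Browser Mode is enabled through the test configuration rather than through a simulated environment. The options below follow the pattern described in the Vitest documentation for the Playwright provider; exact option names can differ between Vitest versions, so treat this as illustrative rather than definitive:

// vitest.config.ts: sketch of replacing a simulated DOM environment with Browser Mode
import { defineConfig } from 'vitest/config'

export default defineConfig({
  test: {
    // environment: 'jsdom',        // the simulated-DOM setup being replaced
    browser: {
      enabled: true,
      provider: 'playwright',       // or 'webdriverio'
      instances: [{ browser: 'chromium' }],
    },
  },
})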
React Testing Library, a lightweight library for testing React components, is built on top of DOM Testing Library, which provides utilities to interact with the DOM. It has long relied on JSDOM for simulating DOM interaction. With the introduction of Vitest Browser Mode, it is possible to migrate away from React Testing Library, as a number of its APIs have been natively rewritten in the same familiar pattern. Kent C. Dodds, the author of React Testing Library, says he has never been so happy to see people uninstalling React Testing Library in favor of the native implementation.
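As an illustration of that familiar pattern, a component test in Browser Mode might look like the following sketch. It assumes the vitest-browser-react companion package and a hypothetical Greeting component; the locator and assertion names mirror what the Vitest documentation describes, but should be checked against the version in use:

// greeting.test.tsx: sketch of a Browser Mode component test
import { expect, test } from 'vitest'
import { render } from 'vitest-browser-react'
import { Greeting } from './Greeting' // hypothetical component under test

test('renders a greeting in a real browser', async () => {
  // render() exposes locator-style queries similar to React Testing Library
  const screen = render(<Greeting name="World" />)
  await expect.element(screen.getByText('Hello World')).toBeVisible()
})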
Vitest also provides support for other frameworks, such as Vue and Svelte, and there is a community package available for Lit. It supports multiple browser environments depending on which provider you use: WebDriverIO supports testing in four browsers, Firefox, Chrome, Edge, and Safari, while Playwright supports Firefox, WebKit, and Chromium.
There are some drawbacks to using Vitest Browser Mode, as outlined in its documentation, such as its experimental status, meaning it is still early in its development, and longer initialization times compared to other testing approaches.
Vite is an open-source, platform-agnostic build tool named after the French word for ‘quick’. It was written by Evan You, the creator of Vue.js. Vitest is a next-generation, Vite-native testing framework that reuses Vite’s config and plugins; it supports ESM, TypeScript, and JSX out of the box.
Full documentation for Browser Mode is available on the Vitest website including setup guides and examples.

MMS • Robert Krzaczynski

Perplexity has released Labs, a new feature for Pro subscribers designed to support more complex tasks beyond question answering. The update marks a shift from search-based interactions toward structured, multi-step workflows powered by generative AI.
Perplexity Labs enables users to perform a wide range of tasks, including generating reports, analyzing data, writing and executing code, and building lightweight web applications, all within a single interface. Users can access Labs via a new mode selector available on web and mobile platforms, with desktop support coming soon.
While Perplexity Search focuses on concise answers and Research (formerly Deep Research) offers more in-depth synthesis, Labs is designed for users who need finished outputs. These can include formatted spreadsheets, visualizations, interactive dashboards, and basic web tools.
Each Lab includes an Assets tab, where users can view or download all generated materials, including charts, images, CSVs, and code files. Some Labs also support an App tab that can render basic web applications directly within the project environment.
According to Aravind Srinivas, CEO and co-founder of Perplexity:
Introducing Perplexity Labs: a new mode of doing your searches on Perplexity for much more complex tasks like building trading strategies, dashboards, headless browsing tasks for real estate research, building mini-web apps, storyboards, and a directory of generated assets.
In practical terms, Labs automates and combines tasks that would otherwise require multiple software tools and considerable manual input. This is particularly relevant for tasks involving structured research, data processing, or prototyping.
Initial feedback has highlighted the speed and contextual accuracy of the platform. Sundararajan Anandan shared:
I recently tried Perplexity Labs, and it is a game-changer. Tasks that once took hours of manual research and formatting were distilled into crisp, actionable insights in under 10 minutes. While it is still early and the platform will need time to mature, the initial experience is genuinely impressive.
However, some early users have pointed out areas for improvement. In particular, follow-up interactions and code revisions after the initial generation are currently limited. As one Reddit user commented:
The biggest problem with Labs is that it doesn’t handle follow-ups very well. It basically requires you to be a one-shotting ninja.
The company has also announced that it is standardizing terminology, renaming “Deep Research” to simply “Research” to clarify the distinctions between the three modes: Search, Research, and Labs.
Perplexity Labs is now live and available to all Pro users. Additional examples and use cases are available via the platform’s Projects Gallery, designed to help users get started with practical tasks.

MMS • Craig Risi

In a recent blog post, Pinterest Engineering detailed its approach to addressing network throttling challenges encountered while operating on Amazon EC2 instances. As a platform serving over 550 million monthly active users, ensuring consistent performance is paramount, especially for critical services like their machine learning feature store, KVStore.
Pinterest observed increased latency and occasional service disruptions in KVStore, particularly during periods of high traffic. These issues often led to application timeouts and cascading failures, adversely affecting user engagement on features like the Homefeed. The root cause was traced to network performance limitations inherent in certain EC2 instance types, which offer “up to” a specified bandwidth. For example, an instance labeled with “up to 12.5 Gbps” might have a baseline bandwidth significantly lower, relying on burst capabilities that are not guaranteed. When network usage exceeded these baselines, packet delays and losses ensued, impacting application performance.
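On Nitro-based instances, the Elastic Network Adapter (ENA) driver exposes counters such as bw_in_allowance_exceeded and pps_allowance_exceeded that increment when traffic exceeds the instance's allowances. The sketch below is a hypothetical Node.js/TypeScript helper, not Pinterest's actual tooling, showing one way such counters could be polled to detect this kind of throttling:

// ena-throttle-check.ts: hypothetical sketch for reading ENA allowance-exceeded counters
import { execFileSync } from 'node:child_process'

function readAllowanceCounters(iface = 'eth0'): Record<string, number> {
  // `ethtool -S <iface>` lists ENA statistics as "name: value" pairs, one per line
  const output = execFileSync('ethtool', ['-S', iface], { encoding: 'utf8' })
  const counters: Record<string, number> = {}
  for (const line of output.split('\n')) {
    const match = line.match(/^\s*(\w*allowance_exceeded):\s*(\d+)\s*$/)
    if (match) counters[match[1]] = Number(match[2])
  }
  return counters
}

// Non-zero counters indicate packets were queued or dropped because an allowance was exceeded
for (const [name, value] of Object.entries(readAllowanceCounters())) {
  if (value > 0) console.warn(`possible throttling: ${name} = ${value}`)
}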
In 2024, Pinterest initiated a migration to AWS’s Nitro-based instance families, such as transitioning from i3 to i4i instances, aiming for improved performance. However, this shift introduced new challenges. During bulk data uploads from Amazon S3 to their wide-column databases, they observed significant performance degradation, particularly in read latencies, resulting in application timeouts. These findings prompted a temporary halt to the migration of over 20,000 instances.
With improved visibility into their network performance, Pinterest implemented several key strategies to mitigate EC2 network throttling. One of the primary approaches was selecting EC2 instances with higher baseline network bandwidth to better support their workloads, moving away from instances that only promised burstable performance. They also introduced traffic shaping techniques to regulate data flow and ensure network usage stayed within optimal thresholds.
In addition, Pinterest distributed workloads more evenly across multiple instances, reducing the risk of overloading any single resource. These combined efforts significantly enhanced the reliability and stability of their systems, effectively minimizing latency spikes and preventing the kind of service disruptions that had previously impacted user experience.
Pinterest’s experience underscores the importance of understanding the nuances of cloud infrastructure, particularly the implications of network bandwidth limitations on EC2 instances. By proactively monitoring and adjusting their infrastructure, they successfully navigated the challenges of network throttling, ensuring a smoother experience for their vast user base.

MMS • Steef-Jan Wiggers

Virt8ra, a significant European initiative positioning itself as a major alternative to US-based cloud vendors, has announced a substantial expansion of its federated infrastructure. The platform, which initially included Arsys, BIT, Gdańsk University of Technology, Infobip, IONOS, Kontron, MONDRAGON Corporation, and Oktawave, coordinated by OpenNebula Systems, has now been joined by six new cloud service providers: ADI Data Center Euskadi, Clever Cloud, CloudFerro, OVHcloud, Scaleway, and Stackscale.
Quentin Adam, CEO of Clever Cloud, commented:
By championing open source and empowering local providers across the EU, Virt8ra is not only helping to reduce our continent’s reliance on hyperscalers and Big Tech vendors, but also laying foundations for a truly autonomous and innovative digital future.
Launched in January 2025 by a consortium of eight European tech organizations coordinated by OpenNebula Systems, Virt8ra aims to establish a sovereign and interoperable cloud ecosystem across the European Union. Leveraging the open-source OpenNebula cloud technology, the initiative prioritizes data localization, flexibility, and vendor independence. The initial phase offered compute and storage resources across six EU member states: Croatia, Germany, the Netherlands, Poland, Slovenia, and Spain.
A commenter on Reddit pointed out the broader challenge, stating:
Infrastructure AND the associated software, if I may add! One of the strengths of Azure and AWS, justifying their services, which are 10 times as expensive as those of OVH or Scaleway, is the huge software suite that comes with it. With LLMs and the associated productivity gains, I guess Europe could catch up quite quickly. However, we need to create strong incentives for European companies to switch to European cloud providers.
Just three months later, this expansion significantly broadens Virt8ra’s reach and capacity. Dr. Ignacio M. Llorente, CEO of OpenNebula Systems and Chair of the Cloud-Edge Working Group at the European Alliance for Industrial Data, Edge, and Cloud, emphasized Virt8ra’s importance as “a key step toward building a sovereign and interoperable cloud ecosystem in Europe.”
Furthermore, Virt8ra is designed to facilitate the easy deployment of distributed applications spanning the cloud-edge continuum, with a strong emphasis on AI and machine learning. The infrastructure is currently being validated through enterprise use cases that demonstrate innovations in AI-enabled orchestration, mechanisms for Data Act-compliant cloud migration and data egress, and the future potential for multi-tenant “AI-as-a-Service” offerings for both inference and training.
In addition, the collaborative effort is taking place within the framework of the Important Project of Common European Interest on Next Generation Cloud Infrastructure and Services (IPCEI-CIS). The IPCEI-CIS, approved by the European Commission in December 2023 and supported by 12 EU Member States, represents the European Union’s largest open-source project to date, backed by over €3 billion in public and private funding. Dr. Alberto P. Martí, chair of the IPCEI-CIS Industry Facilitation Group, commented:
I’m proud to see the EU industry finally taking the lead in developing strategic open-source technologies and delivering tangible solutions for creating sovereign digital infrastructure across Europe.
The ultimate objective of Virt8ra is to integrate and validate a European sovereign virtualization stack built around OpenNebula, delivering an open-source, vendor-neutral solution for managing the entire cloud-edge continuum. Achieving this could empower EU businesses and public organizations, strengthen their digital sovereignty, and reduce their reliance on non-European hyperscalers and Big Tech vendors.

MMS • Trisha Gee, Holly Cummins

Key Takeaways
- Prioritize developer joy by encouraging experimentation, creative play, and regular breaks, which activate deeper thinking, accelerate learning, and enhance problem-solving.
- Acknowledge and support the full scope of a developer’s role, including communication, collaboration, and troubleshooting, to improve alignment and software quality.
- Identify and eliminate sources of friction such as flaky tests, redundant meetings, and inefficient tools to protect developer flow and maximize productivity.
- Recognize that while AI can generate code quickly, its output may lack the concision and adherence to best practices of human-authored code, requiring careful review and investment in developers’ code reading skills and organizational standards.
- Adopt thoughtful measurement and tooling practices, including responsible AI use, to improve code quality and team outcomes, not just to increase output.
If you’ve ever solved a bug in the shower or had a breakthrough idea while unloading the dishwasher, you’re not alone. For software developers, productivity doesn’t always look like heads-down typing. In fact, according to developers and thought leaders Trisha Gee and Holly Cummins, the best code often starts with a bit of fun, or at least a little time away from the computer.
In this article, Holly and Trisha explore why joy isn’t a distraction from productivity: it’s the secret ingredient. From debugging brain waves in the middle of a jog to cutting out test flakiness, Trisha and Holly explain how to reclaim developer satisfaction and boost output by embracing curiosity, minimizing friction, and giving ourselves a break.
The Joy-Productivity Connection
There is a lingering myth in enterprise environments that productivity and fun are at odds, that leaders have to choose between having a happy team and achieving business results. This is just not true. High-performing teams are happier because they’re thriving, not because they’re slacking off. In fact, not only are productive developers happy, but happy developers are also productive.
Play isn’t just fluff; it’s a tool. Whether it’s trying something new in a codebase, hacking together a prototype, or taking a break to let the brain wander, joy helps developers learn faster, solve problems more creatively, and stay engaged.
These benefits are backed up by research that consistently shows that happiness, joy, and satisfaction all lead to better productivity (see the research on developer joy and productivity listed at the end of this article). And when companies chase productivity without considering joy, the result is often burnout and lower output.
Developers Don’t Just Code (and That’s Okay)
Despite what your calendar might say, coding isn’t the only thing you do as a dev. In fact, it might not even be the thing you do most. Surveys find that developers spend less than half their time writing code. This is backed up by Trisha’s informal polling, which finds that it is less than 30 percent.
Developers also spend time in meetings, in discussions on Slack, and wading through email. On the technical side, time is spent managing other parts of the software process: performing code reviews, troubleshooting issues, writing, running, and troubleshooting tests, and maintaining code or systems.
Software development is not and never has been only about writing code. Communicating with other parts of the business (via meetings or other media) is key to writing the right software and writing correct software. This isn’t necessarily a bad thing: good communication leads to better results. But time spent managing flaky tests, watching a build spin, or deciphering cryptic code? That’s where things get wasteful.
Friction, Toil, and the Death of Flow
So what kills joy on a dev team? Developers get irritated with overheads. This could be process overheads, such as onerous status reporting, having to put the same information into multiple systems, or low-value meetings. It could also be technical overheads, such as clunky APIs, or slow feedback loops.
Aim to reduce friction and toil, the little frustrations that break momentum and make work feel like a slog. Long build and test times are common culprits. At Gradle, the team is particularly interested in improving the reliability of tests by giving developers the right tools to understand intermittent failures, for example a dashboard of flaky tests and a visual history of test and build failures. This matters because we often underestimate how not fun it is when our tests are flaky. If a test sometimes passes and sometimes fails, but with no discernible pattern, it distracts us from the fun parts of coding and sends us down a rabbit-hole of difficult troubleshooting.
Most developers love developing. Frustration happens when we’re kept from doing the development that we love. Whether it’s clunky internal tools, redundant processes, or low-value meetings, the result is the same: more time on un-fun tedium translates to less time coding, less joy, and less actual output. People want to achieve things, so if it’s un-fun, that’s a really good indicator that it’s probably a waste of time. Even small tools that remove friction like the live reload experience in Quarkus make a noticeable impact on flow and happiness.
Automate Away The Un-Fun
Good tools remove friction. Great tools make room for joy. Whether it’s using a build cache or accelerating tests, automation gives devs the breathing room to focus on more meaningful (and fun) tasks. If you find you’re spending time on repetitive tasks and no automation tool exists to handle them, invent one. Camille Fournier says in her book The Manager’s Path: A Guide for Tech Leaders Navigating Growth and Change:
“We engineers automate so that we can focus on the fun stuff – and the fun stuff is the work that uses most of your brain”.
Using more of your brain is valuable, but so (strangely) is using less of your brain. How does that work? We’ll explain.
Embracing Dead Time
One of the most counterintuitive productivity tips is … Do nothing (… Really? Really). When we’re stuck on a problem, we’ll often bang our head against the code until midnight, without getting anywhere. Then in the morning, suddenly it takes five minutes for the solution to click into place. A good night’s sleep is the best debugging tool, but why? What happens? This is the default mode network at work. The default mode network is a set of connections in your brain that activates when you’re truly idle. This network is responsible for many vital brain functions, including creativity and complex problem-solving.
Instead of filling every spare moment with busywork, take proper breaks. Go for a walk. Knit. Garden. “Dead time” in these examples isn’t slacking, it’s deep problem-solving in disguise.
And yes, it is still work. Knowledge workers are still working when they’re at the gym, running, or loading the dishwasher. So don’t guilt-trip yourself for stepping away, and remember to remind your boss of the value of downtime.
AI, and the Art of Better Code
As for AI? It has potential. But tread carefully. In our experience, the code generated by AI often prioritizes volume over quality. When we use AI to generate code, we tend to get a lot of code, and it can be pretty flabby. For example, the code will often have comments which just state the obvious and don’t add anything. The flab can be more subtle, too. Some of the Quarkus libraries, like Hibernate ORM with Panache, have beautifully concise programming models that strip out a lot of boilerplate. AI-generated code will often put all the boilerplate back in.
This bloat isn’t just harmless padding. Reading and understanding all those extra comments and lines of code becomes an active productivity drain for future maintainers.
Think carefully when evaluating the productivity impact of AI. If AI can create 90 percent of what we need, but it takes us longer to get that last 10 percent than if we had done it ourselves, it’s not actually a productivity tool, is it?
AI will still help, but we need to use it more effectively. We need to invest time in setting up the tools to use our organization’s specific best practices and design patterns. Using AI is also an investment in ourselves, because using AI well is a skill. Better AI use means better prompts, better training data and fine-tuning, and better human judgment about what “good code” really looks like. The future of work will likely require developers to spend more time reviewing code than writing it. Reading code is a learnable skill, so we suggest investing in being able to do this well (and efficiently!).
When we adopt AI into our workflows, we need to think about what our ‘true’ goal is. Is the aim to write more code, because somewhere there’s a productivity metric which says more code is better, or is the aim to write better code?
Measuring What Matters
Why do we sometimes find it difficult to know if AI-generated code is helping or hindering us? Measuring developer productivity is HARD. Traditional metrics focus on visible activity, like lines of code or number of commits. They’re easy to game and only sometimes reflect the actual value created. Frameworks like SPACE emphasize a more balanced view, and include Satisfaction, Performance, Collaboration, Efficiency and flow, as well as the traditional Activity metrics.
When measuring developer productivity, we recommend doing threat modelling as the first step. When this metric becomes a target, will it still be useful? What behaviors will we see when (not if!) this metric is gamed?
Next, figure out what problem we’re really trying to solve. Is the goal of this measurement to identify low performers and weed them out? Is it to try and create incentives for everyone to work harder? (If it’s either of these, refer back to the threat modeling discussion.) Are we doing the measurement just because we feel we have to have numbers, in order to look like we’re in control?
If the goal is just to have numbers, which won’t really be used for decision-making, perhaps choose a low-cost metric! Or maybe the goal is to identify sources of friction and eliminate them. This is a great goal, but producing metrics of developer productivity may not be the best way to achieve it.
In other words: don’t just measure for the sake of measuring. Know what problem you’re solving and make sure your metrics don’t create new ones.
Final Advice: Fix the Friction, Play More, Work Smarter
Want to be a happier, more productive developer? Get intentional.
Start by tracking what you’re really spending time on, not just the things that are easy to measure or feel productive. Identify the friction points and look for better tools or processes. Continuous improvement isn’t something you need permission for. Find the papercuts and the friction and fix them. It’s easy to get sucked into living with bad tools and tedious processes, but these problems can be fixed! There are good tools out there. There are lean techniques for software development. They’re better and they feel better.
And finally, make time for creative play, not just task completion. It’s easier said than done, but try not to cram your day full of ‘productive’ things, just because you think you’re supposed to. Do you have dead time between meetings? Use that time for creative play and you’ll be amazed how much you can achieve.
Research on developer joy and productivity:
- Bellet, Clement and De Neve, Jan-Emmanuel and Ward, George, Does Employee Happiness have an Impact on Productivity? (October 14, 2019). Saïd Business School WP 2019-13
- Oswald, Andrew J., Proto, Eugenio and Sgroi, Daniel (2015) Happiness and productivity. Journal of Labor Economics, 33 (4). pp. 789-822. doi:10.1086/681096
- Shawn Achor, Positive Intelligence, Jan-Feb 2012
Another Rust Rewrite: OpenAI’s Codex CLI Goes Native, Drops Node and TypeScript for Rust

MMS • Bruno Couriol

OpenAI recently announced that it is rewriting its Codex CLI in Rust. The Codex CLI stack was originally built with React, TypeScript, and Node.js. The rewrite seeks security and performance gains on top of an improved developer experience.
The announcement explains the motivation for the rewrite as follows:
Our goal is to make the software pieces as efficient as possible and there were a few areas we wanted to improve:
- Zero-dependency Install — currently Node v22+ is required, which is frustrating or a blocker for some users
- Native Security Bindings — surprise! We already ship a Rust for Linux sandboxing since the bindings were available
- Optimized Performance — no runtime garbage collection, resulting in lower memory consumption
- Extensible Protocol — we’ve been working on a “wire protocol” for Codex CLI to allow developers to extend the agent in different languages (including Type/JavaScript, Python, etc) and MCPs (already supported in Rust)
Rust is a systems language that prioritizes performance, memory usage, reliability, and resource consumption as design goals. Rust’s rich type system and ownership model guarantee memory safety and thread safety, eliminating many classes of bugs at compile time. On the downside, developers at Microsoft (which mandated the use of Rust for new developments that do not require garbage collection) reported a steep initial learning curve and a reliance on some non-stabilized Rust features. While there are no further details at the moment, the ability to extend Codex CLI with languages that have a larger developer base, such as JavaScript and Python, will be key to community contributions.
Work on Codex CLI’s Rust version is ongoing. The team continues to maintain the original TypeScript version in parallel, fixing vulnerabilities, until the Rust version reaches parity in experience and functionality. Developers can try the new version as follows:
npm i -g @openai/codex@native
codex
Rust rewrites, and native rewrites more broadly, are becoming commonplace, in particular for tooling in search of performance gains. Microsoft itself recently announced porting the TypeScript compiler to Go with a 10x performance improvement. There is additionally ongoing research into using Rust for safety-critical environments such as space onboard systems.
In the words of OpenAI, Codex is a cloud-based software engineering agent that can work on many tasks in parallel. Codex can perform tasks such as writing features, answering questions about a codebase, fixing bugs, and proposing pull requests for review; with each task running in its own sandbox environment.
Codex CLI is open source on GitHub and runs on MacOS, Linux, or Windows via WSL (Windows Subsystem for Linux).

MMS • RSS
Key Highlights:
- Analysts predict significant growth for MongoDB (MDB) in the upcoming Q1 earnings report.
- The average analyst 12-month price target suggests a notable upside from the current share price.
- GuruFocus values imply substantial potential gains, estimating MDB’s fair value significantly higher than current levels.
Upcoming Earnings Snapshot
MongoDB (MDB) is poised to release its Q1 earnings results on June 4th, following market closure. Expectations from analysts are set high, with a projected 29.4% increase in EPS to $0.66 and a 17.1% growth in revenue to $527.48 million. This financial growth trend follows MongoDB’s streak of outperforming market estimates over the last two years.
Wall Street’s Outlook on MongoDB
Wall Street remains optimistic about MongoDB’s future, as evidenced by the average target price of $263.67 from 34 analysts, showcasing a potential upside of 36.09% compared to the current trading price of $193.75. The high estimate reaches $520.00, and the low estimate stands at $160.00. For more comprehensive estimates, visit the MongoDB Inc (MDB) Forecast page.
The consensus from 37 brokerage firms further underscores confidence in MongoDB, with an average recommendation of 2.0, indicating an “Outperform” status. This rating aligns with a scale where 1 represents a Strong Buy and 5 signifies a Sell.
Valuation Insights from GuruFocus
According to GuruFocus’ valuation metrics, the estimated GF Value for MongoDB is $444.25, suggesting an impressive upside of 129.29% from its current price of $193.75. The GF Value represents GuruFocus’ assessment of the fair market value of the stock, determined by analyzing historical trading multiples, business growth benchmarks, and projected business performance. For more in-depth analysis, please refer to the MongoDB Inc (MDB) Summary page.

MMS • RSS

Artificial intelligence is the greatest investment opportunity of our lifetime. The time to invest in groundbreaking AI is now, and this stock is a steal!
AI is eating the world—and the machines behind it are ravenous.
Each ChatGPT query, each model update, each robotic breakthrough consumes massive amounts of energy. In fact, AI is already pushing global power grids to the brink.
Wall Street is pouring hundreds of billions into artificial intelligence—training smarter chatbots, automating industries, and building the digital future. But there’s one urgent question few are asking:
Where will all of that energy come from?
AI is the most electricity-hungry technology ever invented. Each data center powering large language models like ChatGPT consumes as much energy as a small city. And it’s about to get worse.
Even Sam Altman, the founder of OpenAI, issued a stark warning:
“The future of AI depends on an energy breakthrough.”
Elon Musk was even more blunt:
“AI will run out of electricity by next year.”
As the world chases faster, smarter machines, a hidden crisis is emerging behind the scenes. Power grids are strained. Electricity prices are rising. Utilities are scrambling to expand capacity.
And that’s where the real opportunity lies…
One little-known company, almost entirely overlooked by most AI investors, could be the ultimate backdoor play. It’s not a chipmaker. It’s not a cloud platform. But it might be the most important AI stock in the US: it owns critical energy infrastructure assets positioned to feed the coming AI energy spike.
As demand from AI data centers explodes, this company is gearing up to profit from the most valuable commodity in the digital age: electricity.
The “Toll Booth” Operator of the AI Energy Boom
- It owns critical nuclear energy infrastructure assets, positioning it at the heart of America’s next-generation power strategy.
- It’s one of the only global companies capable of executing large-scale, complex EPC (engineering, procurement, and construction) projects across oil, gas, renewable fuels, and industrial infrastructure.
- It plays a pivotal role in U.S. LNG exportation—a sector about to explode under President Trump’s renewed “America First” energy doctrine.
Trump has made it clear: Europe and U.S. allies must buy American LNG.
And our company sits in the toll booth—collecting fees on every drop exported.
But that’s not all…
As Trump’s proposed tariffs push American manufacturers to bring their operations back home, this company will be first in line to rebuild, retrofit, and reengineer those facilities.
AI. Energy. Tariffs. Onshoring. This One Company Ties It All Together.
While the world is distracted by flashy AI tickers, a few smart investors are quietly scooping up shares of the one company powering it all from behind the scenes.
AI needs energy. Energy needs infrastructure.
And infrastructure needs a builder with experience, scale, and execution.
This company has its finger in every pie—and Wall Street is just starting to notice.
Wall Street is noticing this company also because it is quietly riding all of these tailwinds—without the sky-high valuation.
While most energy and utility firms are buried under mountains of debt and coughing up hefty interest payments just to appease bondholders…
This company is completely debt-free.
In fact, it’s sitting on a war chest of cash—equal to nearly one-third of its entire market cap.
It also owns a huge equity stake in another red-hot AI play, giving investors indirect exposure to multiple AI growth engines without paying a premium.
And here’s what the smart money has started whispering…
The Hedge Fund Secret That’s Starting to Leak Out
This stock is so off-the-radar, so absurdly undervalued, that some of the most secretive hedge fund managers in the world have begun pitching it at closed-door investment summits.
They’re sharing it quietly, away from the cameras, to rooms full of ultra-wealthy clients.
Why? Because excluding cash and investments, this company is trading at less than 7 times earnings.
And that’s for a business tied to:
- The AI infrastructure supercycle
- The onshoring boom driven by Trump-era tariffs
- A surge in U.S. LNG exports
- And a unique footprint in nuclear energy—the future of clean, reliable power
You simply won’t find another AI and energy stock this cheap… with this much upside.
This isn’t a hype stock. It’s not riding on hope.
It’s delivering real cash flows, owns critical infrastructure, and holds stakes in other major growth stories.
This is your chance to get in before the rockets take off!
Disruption is the New Name of the Game: Let’s face it, complacency breeds stagnation.
AI is the ultimate disruptor, and it’s shaking the foundations of traditional industries.
The companies that embrace AI will thrive, while the dinosaurs clinging to outdated methods will be left in the dust.
As an investor, you want to be on the side of the winners, and AI is the winning ticket.
The Talent Pool is Overflowing: The world’s brightest minds are flocking to AI.
From computer scientists to mathematicians, the next generation of innovators is pouring its energy into this field.
This influx of talent guarantees a constant stream of groundbreaking ideas and rapid advancements.
By investing in AI, you’re essentially backing the future.
The future is powered by artificial intelligence, and the time to invest is NOW.
Don’t be a spectator in this technological revolution.
Dive into the AI gold rush and watch your portfolio soar alongside the brightest minds of our generation.
This isn’t just about making money – it’s about being part of the future.
So, buckle up and get ready for the ride of your investment life!
Act Now and Unlock a Potential 100+% Return within 12 to 24 months.
We’re now offering month-to-month subscriptions with no commitments.
For a ridiculously low price of just $9.99 per month, you can unlock our in-depth investment research and exclusive insights – that’s less than a single fast food meal!
Here’s why this is a deal you can’t afford to pass up:
- Access to our Detailed Report on our AI, Tariffs, and Nuclear Energy Stock with 100+% potential upside within 12 to 24 months
- BONUS REPORT on our #1 AI-Robotics Stock with 10000% upside potential: Our in-depth report dives deep into our #1 AI/robotics stock’s groundbreaking technology and massive growth potential.
- One New Issue of Our Premium Readership Newsletter: You will also receive one new issue per month and at least one new stock pick per month from our monthly newsletter’s portfolio over the next 12 months. These stocks are handpicked by our research director, Dr. Inan Dogan.
- One free upcoming issue of our 70+ page Quarterly Newsletter: A value of $149
- Bonus Content: Premium access to members-only fund manager video interviews
- Ad-Free Browsing: Enjoy a month of investment research free from distracting banner and pop-up ads, allowing you to focus on uncovering the next big opportunity.
- Lifetime Price Guarantee: Your renewal rate will always remain the same as long as your subscription is active.
- 30-Day Money-Back Guarantee: If you’re not absolutely satisfied with our service, we’ll provide a full refund within 30 days, no questions asked.
Space is Limited! Only 1000 spots are available for this exclusive offer. Don’t let this chance slip away – subscribe to our Premium Readership Newsletter today and unlock the potential for a life-changing investment.
Here’s what to do next:
1. Head over to our website and subscribe to our Premium Readership Newsletter for just $9.99.
2. Enjoy a month of ad-free browsing, exclusive access to our in-depth report on the Trump tariff and nuclear energy company as well as the revolutionary AI-robotics company, and the upcoming issues of our Premium Readership Newsletter.
3. Sit back, relax, and know that you’re backed by our ironclad 30-day money-back guarantee.
Don’t miss out on this incredible opportunity! Subscribe now and take control of your AI investment future!
No worries about auto-renewals! Our 30-Day Money-Back Guarantee applies whether you’re joining us for the first time or renewing your subscription a month later!

MMS • RSS
The NoSQL Database Market Report by The Business Research Company delivers a detailed market assessment, covering size projections from 2025 to 2034. This report explores crucial market trends, major drivers, and market segmentation.
What Is the NoSQL Database Market Size and Projected Growth Rate?
The NoSQL database market size will grow from $11.6 billion in 2024 to $15.59 billion in 2025 at a compound annual growth rate (CAGR) of 34.4%. The growth in the historic period is attributed to increased data volume, need for scalable solutions, limitations of traditional databases, rise of big data analytics, growth of web and mobile applications, demand for flexible data models, and advancements in cloud computing.
The NoSQL database market size is expected to grow to $50.39 billion by 2029, at a CAGR of 34.1%. Growth is driven by the rise of IoT devices, AI, real-time data processing, distributed systems, e-commerce platforms, data security needs, and database technology advances. Trends include multi-model databases, database automation, machine learning integration, edge computing, and data privacy.
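For reference, the quoted growth figures are consistent with the standard compound annual growth rate formula; the quick check below (TypeScript, purely illustrative) reproduces the reported percentages from the dollar values above:

// CAGR = (endValue / startValue) ** (1 / years) - 1
const cagr = (start: number, end: number, years: number) =>
  (end / start) ** (1 / years) - 1

console.log((cagr(11.6, 15.59, 1) * 100).toFixed(1))  // ≈ 34.4% (2024 to 2025)
console.log((cagr(15.59, 50.39, 4) * 100).toFixed(1)) // ≈ 34.1% (2025 to 2029)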
Purchase the full report for exclusive industry analysis:
https://www.thebusinessresearchcompany.com/purchaseoptions.aspx?id=19616
What Are the Major Segments in the NoSQL Database Market?
The NoSQL database market covered in this report is segmented –
1) By Type: Key-Value Store, Document Database, Column Based Store, Graph Database
2) By Organization Size: Small And Medium Enterprises, Large Enterprises
3) By Application: Data Storage, Mobile Apps, Web Apps, Data Analytics, Other Applications
4) By Industry Vertical: Banking, Financial Services, And Insurance (BFSI), Retail And E-Commerce, Healthcare And Life Sciences, Government And Public Sector, Telecom And Information Technology (IT), Manufacturing
Subsegments:
1) By Key-Value Store: Distributed Key-Value Stores, In-Memory Key-Value Stores, Persistent Key-Value Stores, Caching Solutions
2) By Document Database: Schema-Free Document Databases, Self-Describing Document Databases, Multi-Model Document Databases, Search-Optimized Document Databases
3) By Column-Based Store: Wide-Column Stores, Time-Series Column Stores, Column Family Stores, Distributed Column Stores
4) By Graph Database: Property Graph Databases, RDF Graph Databases, Multi-Model Graph Databases, Graph Analytics Platforms
Get your free sample here:
https://www.thebusinessresearchcompany.com/sample.aspx?id=19616&type=smp
What Is Driving NoSQL Database Market Evolution?
The increased demand for online gaming and multimedia consumption is expected to boost the growth of the NoSQL database market. The rise in online gaming and digital entertainment is driven by technological advancements and widespread access to high-speed internet. NoSQL databases are crucial for managing large amounts of unstructured data, supporting high-performance queries, and enabling real-time interactions, enhancing the user experience. For example, a study by the Office of Communications reported that individuals in the UK spent an average of seven and a half hours per week on online gaming. Thus, the surge in online gaming and multimedia consumption is propelling the NoSQL database market.
Which Firms Dominate The NoSQL Database Market Segments?
Major companies operating in the NoSQL database market are Google LLC, Microsoft Corporation, Amazon Web Services Inc., International Business Machines Corporation, Oracle Corporation, SAP SE, Hewlett Packard Enterprise (HPE), Databricks Inc., MongoDB Inc., Elastic NV, The Apache Software Foundation, Redis Labs Ltd., Neo4j Inc., DataStax Inc., Couchbase Inc., InfluxData Inc., Aerospike Inc., MapR Technologies Inc., TigerGraph Inc., InfiniteGraph Inc., Basho Technologies Inc., VoltDB Inc., OrientDB Inc., Fauna Inc., NuoDB Inc., RavenDB Ltd.
What Trends Are Driving Growth in The NoSQL Database Market?
Major companies in the NoSQL database market are forming strategic partnerships to improve technology integration and expand market presence. A strategic partnership typically involves collaboration between two or more companies to combine resources, expertise, and efforts towards common goals. For example, Taashee Linux Services Private Limited (TLSPL), an India-based technology firm, partnered with RavenDB, an Israel-based NoSQL document database company, in December 2022. This partnership enables Taashee’s clients to access RavenDB’s advanced features, such as high-performance ACID-compliant transactions, auto-indexing, and full-text search capabilities, enhancing their ability to provide tailored, scalable database solutions.
Get the full report for exclusive industry analysis:
https://www.thebusinessresearchcompany.com/report/nosql-database-global-market-report
Which Is The Largest Region In The NoSQL Database Market?
North America was the largest region in the NoSQL database market in 2024. The regions covered in the NoSQL database market report are Asia-Pacific, Western Europe, Eastern Europe, North America, South America, Middle East, Africa.
Frequently Asked Questions:
1. What Is the Market Size and Growth Rate of the NoSQL Database Market?
2. What is the CAGR expected in the NoSQL Database Market?
3. What Are the Key Innovations Transforming the NoSQL Database Industry?
4. Which Region Is Leading the NoSQL Database Market?
Why This Report Matters:
Competitive overview: This report analyzes the competitive landscape of the NoSQL database market, evaluating key players on market share, revenue, and growth factors.
Informed Decisions: Understand key strategies related to products, segmentation, and industry trends.
Efficient Research: Quickly identify market growth, leading players, and major segments.
Connect with us on:
LinkedIn: https://in.linkedin.com/company/the-business-research-company,
Twitter: https://twitter.com/tbrc_info,
YouTube: https://www.youtube.com/channel/UC24_fI0rV8cR5DxlCpgmyFQ.
Contact Us
Europe: +44 207 1930 708,
Asia: +91 88972 63534,
Americas: +1 315 623 0293 or
Email: mailto:info@tbrc.info
Learn More About The Business Research Company
With over 15,000 reports from 27 industries covering 60+ geographies, The Business Research Company has built a reputation for offering comprehensive, data-rich research and insights. Our flagship product, the Global Market Model, delivers comprehensive and updated forecasts to support informed decision-making.
This release was published on openPR.