Encrypting Data to Meet Global Privacy Law Requirements – IT Security News

MMS Founder
MMS RSS

Posted on mongodb google news. Visit mongodb google news

As organizations navigate an increasingly complex patchwork of privacy regulations worldwide, encryption has emerged as a critical tool for compliance while protecting sensitive data from unauthorized access. Despite varying requirements across different jurisdictions, encryption provides a technical foundation that addresses core principles common to most global privacy frameworks. Divergent Encryption Requirements Across Major Privacy Laws […]

The post Encrypting Data to Meet Global Privacy Law Requirements appeared first on Cyber Security News.

This article has been indexed from Cyber Security News



How Aerospike is Powering Enterprise Ops with AI, Vector Search, and Real-Time Data

MMS Founder
MMS RSS

Posted on nosqlgooglealerts. Visit nosqlgooglealerts

Srini V. Srinivasan is the Co-founder and Chief Technology Officer of Aerospike, and a recognized database pioneer from Silicon Valley. With over 20 years of experience building and running high-scale infrastructure systems, Srini brings deep technical expertise in distributed systems, real-time data, and database architecture. He holds more than a dozen patents spanning web, mobile, and data technologies. Before founding Aerospike, he led engineering teams at Yahoo, where he encountered the scale challenges that would inspire Aerospike’s creation. Srini holds a B.Tech in Computer Science from IIT Madras, and an M.S. and Ph.D. in Computer Science from the University of Wisconsin-Madison.

In this exclusive virtual interview with DATAQUEST, Srini Srinivasan discusses the transformative role of vector databases, the challenges of scaling AI at real-time speed, the evolving architecture of data systems, and Aerospike’s deepening footprint in India’s digital economy. Drawing from rich experience in high-scale infrastructure, Srini offers sharp insights into how real-time data, NoSQL, and AI-driven innovation are converging to define the next chapter of enterprise technology. Excerpts.

Vector databases are gaining traction for scaling AI systems. How do you see them transforming real-time data handling, and what’s Aerospike’s role here?

Vector search is a key component in AI-powered search. For instance, in image or recommendation searches, you generate a vector for a query item and compare it to existing vectors to find the closest matches. This applies to various domains like security, person identification, and related text search. The ability to combine vector search with Retrieval-Augmented Generation (RAG), graph, and key-value search creates powerful, intelligent applications. Aerospike’s role is to offer a high-performance database infrastructure that supports these hybrid search techniques at scale. 
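
To make the mechanics concrete, the sketch below shows the nearest-neighbor step behind vector search using plain cosine similarity over a few in-memory embeddings. It is framework-agnostic and illustrative only; it does not use Aerospike's API, and the vectors and item names are made up.

import numpy as np

# Hypothetical pre-computed embeddings for items already stored in the database
item_vectors = {
    "item_a": np.array([0.12, 0.85, 0.31]),
    "item_b": np.array([0.90, 0.10, 0.05]),
    "item_c": np.array([0.15, 0.80, 0.40]),
}

def cosine_similarity(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def nearest_items(query_vector, k=2):
    # Score every stored vector against the query and keep the top-k matches
    scored = [(name, cosine_similarity(query_vector, vec))
              for name, vec in item_vectors.items()]
    return sorted(scored, key=lambda pair: pair[1], reverse=True)[:k]

query = np.array([0.10, 0.82, 0.35])  # embedding of the query image or text
print(nearest_items(query))           # closest stored items first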

Given the data heterogeneity across SQL, NoSQL, and legacy systems, how do enterprises unify their data for real-time insights?

Many of our customers, like Airtel and Flipkart, operate at massive scale and rely on real-time insights. They have data across multiple legacy and modern systems. A common pattern we see is streaming data from diverse systems using tools like Kafka into Aerospike. This enables a real-time, unified customer 360 view. Over time, as confidence in Aerospike grows, some customers even make it their system of record. However, we often coexist with other databases, allowing seamless data aggregation without full migration.
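
As a rough illustration of that streaming pattern (a simplified sketch, not the actual Airtel or Flipkart pipeline), the snippet below consumes customer events from a Kafka topic with kafka-python and folds them into a per-customer record; the topic name, brokers, and event fields are assumptions, and an in-memory dict stands in for the real-time store.

import json
from kafka import KafkaConsumer

# Assumed topic and broker address; each source system publishes its events here
consumer = KafkaConsumer(
    "customer-events",
    bootstrap_servers=["localhost:9092"],
    value_deserializer=lambda raw: json.loads(raw.decode("utf-8")),
)

customer_360 = {}  # stand-in for the real-time key-value store

for message in consumer:
    event = message.value  # e.g. {"customer_id": "42", "channel": "web", ...}
    record = customer_360.setdefault(event["customer_id"], {})
    record.update(event)   # latest attributes win, building a unified view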

Enterprises are struggling to scale AI projects and align them with business goals. How does Aerospike support this evolution?

We’ve supported AI/ML use cases for over a decade. Early applications included fraud detection and real-time recommendations. These require low-latency access to high-volume data. With today’s explosion in AI, including generative and agentic models, the challenge is integrating ever-changing real-time data. While tools like ChatGPT handle static content well, incorporating dynamic data streams into AI-driven decision-making is still evolving. We’re actively working with customers to bridge this gap, ensuring ROI from real-time AI implementations.

Analysts say AI is reshaping data architectures. What are the trends you’re seeing, and how is Aerospike adapting?

We see a strong shift across industries toward real-time analytics: fraud detection, threat assessment, recommendations, etc. A major trend is combining different data models—vector, graph, text, in a unified system. Aerospike’s platform supports this convergence. Our ability to ingest massive volumes of streaming data and execute real-time queries allows companies to make instant decisions. The traditional limits of how much data can be processed in real time have been pushed back significantly.

Batch processing seems to be fading. Do you agree?

Absolutely. Historically, batch existed because source or target systems couldn’t handle real-time loads. We’ve seen this evolution in new age tech companies in gaming, adtech and fintech, and now it’s reaching traditional enterprises. Real-time processing has become a necessity for delivering timely, personalized services at scale. Batch won’t disappear overnight, but all new systems are gravitating toward streaming and event-driven models.

How is Aerospike positioned in India, especially with the real-time push from UPI and digital ecosystems?

Aerospike has been part of India’s digital growth since 2011. Companies like Flipkart, Snapdeal, Airtel, and Dream11 use Aerospike for real-time personalization, payments, and customer insights. For example, during Flipkart’s Big Billion Day, we handled 95 million transactions per second, supporting personalized experiences for 450 million users. Through partners like Mindgate, we also support hundreds of millions of UPI transactions daily. India is at the forefront of real-time, consumer-grade workloads, and Aerospike is deeply embedded in this ecosystem.

What’s your outlook on NoSQL as AI continues to mature? And where is Aerospike focusing in the next few years?

NoSQL is well-suited for AI because of its flexibility and scalability. Aerospike is focused on three priorities: first, being fully cloud-native and easy to consume on all major clouds; second, advancing vector search and graph capabilities relevant to AI; and third, strengthening community engagement. We aim to support AI strategies with high throughput, low latency, and large-scale data ingestion, especially for regions like India that operate at massive consumer scale. Our roadmap includes deeper AI integration, real-time data models, and continued leadership in performance-driven architectures.

Also Read:

Elevating Enterprise Data Architectures with Aerospike

Aerospike Sees Massive Growth (51%) Fueled by AI Demands

How Aerospike strategizes with data in the customer experience battleground

Aerospike Secures $109M in Growth Capital from Sumeru Equity Partners 



Google Releases LMEval, an Open-Source Cross-Provider LLM Evaluation Tool

MMS Founder
MMS Sergio De Simone

Article originally posted on InfoQ. Visit InfoQ

LMEval aims to help AI researchers and developers compare the performance of different large language models. Designed to be accurate, multimodal, and easy to use, LMEval has already been used to evaluate major models in terms of safety and security.

One of the reasons behind LMEval is the fast pace at which new models are being introduced. This makes it essential to evaluate them quickly and reliably to assess their suitability for specific applications, say Google researchers. Among its key features are compatibility with a wide range of LLM providers, incremental benchmark execution for improved efficiency, support for multimodal evaluation—including text, images, and code—and encrypted result storage for enhanced security.

For cross-provider support, it is critical that evaluation benchmarks can be defined once and reused across multiple models, despite differences in their APIs. To this end, LMEval uses LiteLLM, a framework that allows developers to use the OpenAI API format to call a variety of LLM providers, including Bedrock, Hugging Face, Vertex AI, Together AI, Azure, OpenAI, Groq, and others. LiteLLM translates inputs to match each provider’s specific requirements for completion, embedding, and image generation endpoints, and produces a uniform output format.
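
To illustrate the idea with generic LiteLLM usage (not code taken from LMEval; the model names are examples), the same OpenAI-style call is routed to different providers simply by changing the model string:

from litellm import completion

messages = [{"role": "user", "content": "What color are this cat's eyes?"}]

# Same call shape for every provider; LiteLLM translates it to each provider's API
openai_reply = completion(model="gpt-4o-mini", messages=messages)
gemini_reply = completion(model="gemini/gemini-1.5-pro", messages=messages)

print(openai_reply.choices[0].message.content)
print(gemini_reply.choices[0].message.content)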

To improve execution efficiency when new models are released, LMEval runs only the evaluations that are strictly necessary, whether for new models, prompts, or questions. This is made possible by an intelligent evaluation engine that follows an incremental evaluation model.
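
Conceptually, incremental evaluation amounts to caching completed (model, question) pairs and running only what is new. The following simplified sketch illustrates the idea; it is not LMEval's actual engine:

completed = {}  # (model_name, question_id) -> score; persisted between runs in practice

def evaluate(models, questions, run_model):
    for model in models:
        for question in questions:
            key = (model, question["id"])
            if key in completed:  # already scored in a previous run: skip
                continue
            completed[key] = run_model(model, question["text"])
    return completed

questions = [{"id": 0, "text": "what color are the cat's eyes?"}]
evaluate(["model-a"], questions, lambda m, q: 1.0)
# Adding a new model later re-runs only the new model's questions
evaluate(["model-a", "model-b"], questions, lambda m, q: 0.5)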

Written in Python and available on GitHub, LMEval requires you to follow a series of steps to run an evaluation. First, you define your benchmark by specifying the tasks to execute, e.g., detect eye colors in a picture, along with the prompt, the image, and the expected results. Then, you list the models to evaluate and run the benchmark:

benchmark = Benchmark(name='Cat Visual Questions',
                      description='Ask questions about cats picture')

...

scorer = get_scorer(ScorerType.contain_text_insensitive)
task = Task(name='Eyes color', type=TaskType.text_generation, scorer=scorer)
category.add_task(task)

# add questions
source = QuestionSource(name='cookbook')
# cat 1 question - create question then add media image
question = Question(id=0, question='what is the colors of eye?',
                    answer='blue', source=source)
question.add_media('./data/media/cat_blue.jpg')
task.add_question(question)

...

# evaluate benchmark on two models
models = [GeminiModel(), GeminiModel(model_version='gemini-1.5-pro')]

prompt = SingleWordAnswerPrompt()
evaluator = Evaluator(benchmark)
eval_plan = evaluator.plan(models, prompt)  # plan evaluation
completed_benchmark = evaluator.execute()  # run evaluation

Optionally, you can save the evaluation results to a SQLite database and export the data to pandas for further analysis and visualization. LMEval uses encryption to store benchmark data and evaluation results to protect against crawling or indexing.
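
As a generic illustration of that last step (not LMEval-specific code; the file and table names are assumptions), results stored in SQLite can be pulled into pandas for analysis:

import sqlite3
import pandas as pd

# Hypothetical results database and table
with sqlite3.connect("lmeval_results.db") as conn:
    df = pd.read_sql_query("SELECT model, task, score FROM results", conn)

print(df.groupby("model")["score"].mean())  # quick per-model comparison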

LMEval also includes LMEvalboard, a visual dashboard that lets you view overall performance, analyze individual models, or compare multiple models.

As mentioned, LMEval has been used to create the Phare LLM Benchmark, designed to evaluate LLM safety and security, including resistance to hallucination, factual accuracy, bias, and potential harm.

LMEval is not the only cross-provider LLM evaluation framework currently available. Others include Harbor Bench and EleutherAI’s LM Evaluation Harness. Harbor Bench, limited to text prompts, has the interesting feature of using an LLM to judge result quality. In contrast, EleutherAI’s LM Evaluation Harness includes over 60 benchmarks and allows users to define new ones using YAML.



MongoDB, Inc. (NASDAQ:MDB) Shares Acquired by Red Spruce Capital LLC – MarketBeat

MMS Founder
MMS RSS

Posted on mongodb google news. Visit mongodb google news

Red Spruce Capital LLC lifted its stake in shares of MongoDB, Inc. (NASDAQ:MDB) by 51.3% in the 1st quarter, according to its most recent 13F filing with the Securities and Exchange Commission. The fund owned 12,107 shares of the company’s stock after buying an additional 4,104 shares during the quarter. Red Spruce Capital LLC’s holdings in MongoDB were worth $2,124,000 at the end of the most recent reporting period.

Several other hedge funds also recently made changes to their positions in MDB. HighTower Advisors LLC raised its stake in shares of MongoDB by 2.0% during the fourth quarter. HighTower Advisors LLC now owns 18,773 shares of the company’s stock worth $4,371,000 after purchasing an additional 372 shares during the last quarter. Jones Financial Companies Lllp increased its stake in MongoDB by 68.0% in the 4th quarter. Jones Financial Companies Lllp now owns 1,020 shares of the company’s stock valued at $237,000 after buying an additional 413 shares during the period. Smartleaf Asset Management LLC increased its stake in MongoDB by 56.8% in the 4th quarter. Smartleaf Asset Management LLC now owns 370 shares of the company’s stock valued at $87,000 after buying an additional 134 shares during the period. Clear Creek Financial Management LLC purchased a new position in MongoDB during the 4th quarter worth $295,000. Finally, Steward Partners Investment Advisory LLC lifted its stake in shares of MongoDB by 12.9% in the 4th quarter. Steward Partners Investment Advisory LLC now owns 1,168 shares of the company’s stock worth $272,000 after acquiring an additional 133 shares during the period. Hedge funds and other institutional investors own 89.29% of the company’s stock.

Insider Buying and Selling at MongoDB

In other news, Director Dwight A. Merriman sold 3,000 shares of the company’s stock in a transaction that occurred on Monday, March 3rd. The shares were sold at an average price of $270.63, for a total transaction of $811,890.00. Following the completion of the sale, the director now owns 1,109,006 shares of the company’s stock, valued at approximately $300,130,293.78. This represents a 0.27% decrease in their position. The sale was disclosed in a legal filing with the SEC, which is accessible through this hyperlink. Also, insider Cedric Pech sold 1,690 shares of the firm’s stock in a transaction dated Wednesday, April 2nd. The shares were sold at an average price of $173.26, for a total value of $292,809.40. Following the completion of the sale, the insider now directly owns 57,634 shares in the company, valued at $9,985,666.84. This trade represents a 2.85% decrease in their ownership of the stock. The disclosure for this sale can be found here. Insiders have sold a total of 25,203 shares of company stock worth $4,660,459 over the last ninety days. Insiders own 3.60% of the company’s stock.

MongoDB Price Performance

Shares of MongoDB stock opened at $189.36 on Friday. The company has a market capitalization of $15.37 billion, a P/E ratio of -69.11 and a beta of 1.49. The business has a 50-day moving average of $174.52 and a two-hundred day moving average of $234.21. MongoDB, Inc. has a 1 year low of $140.78 and a 1 year high of $370.00.

MongoDB (NASDAQ:MDB) last posted its quarterly earnings data on Wednesday, March 5th. The company reported $0.19 earnings per share for the quarter, missing the consensus estimate of $0.64 by ($0.45). MongoDB had a negative return on equity of 12.22% and a negative net margin of 10.46%. The firm had revenue of $548.40 million for the quarter, compared to analyst estimates of $519.65 million. During the same period in the prior year, the company earned $0.86 earnings per share. On average, equities analysts anticipate that MongoDB, Inc. will post -1.78 EPS for the current fiscal year.

Analyst Ratings Changes

Several analysts have recently commented on the company. Robert W. Baird reduced their price target on MongoDB from $390.00 to $300.00 and set an “outperform” rating for the company in a report on Thursday, March 6th. Redburn Atlantic raised MongoDB from a “sell” rating to a “neutral” rating and set a $170.00 target price for the company in a report on Thursday, April 17th. Rosenblatt Securities reaffirmed a “buy” rating and set a $350.00 price target on shares of MongoDB in a research note on Tuesday, March 4th. Stifel Nicolaus reduced their price objective on shares of MongoDB from $340.00 to $275.00 and set a “buy” rating for the company in a research note on Friday, April 11th. Finally, Daiwa Capital Markets began coverage on shares of MongoDB in a report on Tuesday, April 1st. They set an “outperform” rating and a $202.00 price objective on the stock. Nine research analysts have rated the stock with a hold rating, twenty-three have assigned a buy rating and one has given a strong buy rating to the company’s stock. According to data from MarketBeat.com, the stock presently has a consensus rating of “Moderate Buy” and an average target price of $286.88.

About MongoDB

MongoDB, Inc., together with its subsidiaries, provides a general-purpose database platform worldwide. The company provides MongoDB Atlas, a hosted multi-cloud database-as-a-service solution; MongoDB Enterprise Advanced, a commercial database server for enterprise customers to run in the cloud, on-premises, or in a hybrid environment; and Community Server, a free-to-download version of its database, which includes the functionality that developers need to get started with MongoDB.



111 Capital Makes New Investment in MongoDB, Inc. (NASDAQ:MDB) – MarketBeat

MMS Founder
MMS RSS

Posted on mongodb google news. Visit mongodb google news

111 Capital acquired a new position in MongoDB, Inc. (NASDAQ:MDB) during the 4th quarter, according to its most recent 13F filing with the Securities and Exchange Commission. The institutional investor acquired 1,675 shares of the company’s stock, valued at approximately $390,000.

Other hedge funds have also bought and sold shares of the company. Strategic Investment Solutions Inc. IL acquired a new position in shares of MongoDB during the 4th quarter worth approximately $29,000. NCP Inc. acquired a new position in shares of MongoDB during the 4th quarter worth approximately $35,000. Coppell Advisory Solutions LLC raised its stake in shares of MongoDB by 364.0% during the 4th quarter. Coppell Advisory Solutions LLC now owns 232 shares of the company’s stock worth $54,000 after buying an additional 182 shares during the period. Smartleaf Asset Management LLC raised its stake in shares of MongoDB by 56.8% during the 4th quarter. Smartleaf Asset Management LLC now owns 370 shares of the company’s stock worth $87,000 after buying an additional 134 shares during the period. Finally, Manchester Capital Management LLC raised its stake in shares of MongoDB by 57.4% during the 4th quarter. Manchester Capital Management LLC now owns 384 shares of the company’s stock worth $89,000 after buying an additional 140 shares during the period. Hedge funds and other institutional investors own 89.29% of the company’s stock.

Analysts Set New Price Targets

Several research analysts have recently weighed in on the company. Oppenheimer reduced their target price on MongoDB from $400.00 to $330.00 and set an “outperform” rating for the company in a research note on Thursday, March 6th. Citigroup reduced their target price on MongoDB from $430.00 to $330.00 and set a “buy” rating for the company in a research note on Tuesday, April 1st. Monness Crespi & Hardt raised MongoDB from a “sell” rating to a “neutral” rating in a research note on Monday, March 3rd. Piper Sandler reduced their price objective on MongoDB from $280.00 to $200.00 and set an “overweight” rating for the company in a research note on Wednesday, April 23rd. Finally, UBS Group set a $350.00 target price on MongoDB in a report on Tuesday, March 4th. Nine investment analysts have rated the stock with a hold rating, twenty-three have given a buy rating and one has assigned a strong buy rating to the company’s stock. According to data from MarketBeat, MongoDB currently has an average rating of “Moderate Buy” and a consensus target price of $286.88.

MongoDB Stock Performance

Shares of MDB opened at $189.36 on Friday. The stock has a 50-day simple moving average of $174.52 and a 200-day simple moving average of $234.21. The stock has a market capitalization of $15.37 billion, a PE ratio of -69.11 and a beta of 1.49. MongoDB, Inc. has a 1 year low of $140.78 and a 1 year high of $370.00.

MongoDB (NASDAQ:MDB) last announced its quarterly earnings data on Wednesday, March 5th. The company reported $0.19 earnings per share (EPS) for the quarter, missing the consensus estimate of $0.64 by ($0.45). MongoDB had a negative net margin of 10.46% and a negative return on equity of 12.22%. The company had revenue of $548.40 million during the quarter, compared to analysts’ expectations of $519.65 million. During the same quarter in the previous year, the business posted $0.86 EPS. Analysts predict that MongoDB, Inc. will post -1.78 EPS for the current year.

Insiders Place Their Bets

In other MongoDB news, CEO Dev Ittycheria sold 18,512 shares of MongoDB stock in a transaction that occurred on Wednesday, April 2nd. The stock was sold at an average price of $173.26, for a total transaction of $3,207,389.12. Following the transaction, the chief executive officer now owns 268,948 shares of the company’s stock, valued at $46,597,930.48. The trade was a 6.44% decrease in their ownership of the stock. The transaction was disclosed in a filing with the SEC, which is available at this hyperlink. Also, CAO Thomas Bull sold 301 shares of MongoDB stock in a transaction that occurred on Wednesday, April 2nd. The stock was sold at an average price of $173.25, for a total value of $52,148.25. Following the transaction, the chief accounting officer now directly owns 14,598 shares in the company, valued at approximately $2,529,103.50. This trade represents a 2.02% decrease in their position. The disclosure for this sale can be found here. Insiders have sold a total of 25,203 shares of company stock valued at $4,660,459 over the last quarter. Company insiders own 3.60% of the company’s stock.

MongoDB Company Profile

MongoDB, Inc., together with its subsidiaries, provides a general-purpose database platform worldwide. The company provides MongoDB Atlas, a hosted multi-cloud database-as-a-service solution; MongoDB Enterprise Advanced, a commercial database server for enterprise customers to run in the cloud, on-premises, or in a hybrid environment; and Community Server, a free-to-download version of its database, which includes the functionality that developers need to get started with MongoDB.



OpenSearch 3.0 Now Generally Available, with a Focus on Vector Database Performance and Scalability

MMS Founder
MMS Renato Losio

Article originally posted on InfoQ. Visit InfoQ

The OpenSearch Software Foundation has announced the general availability of OpenSearch 3.0, the first major release in three years and the first since the project joined the Linux Foundation. This version introduces native support for the Model Context Protocol (MCP), along with pull-based data ingestion and gRPC support, aimed at improving scalability and integration.

OpenSearch was launched in 2021 by AWS as a fork of Elasticsearch 7.10, following Elastic’s license change. With performance as a key focus of this release, OpenSearch 3.0 delivers up to 9.5x faster vector search compared to version 1.3, thanks to support for GPU acceleration and more efficient indexing.

OpenSearch 3.0 upgrades to Apache Lucene 10 and introduces enhancements to data ingestion, transport, and management. James McIntyre, senior product marketing manager at AWS, Saurabh Singh, engineering leader at AWS, and Jiaxiang (Peter) Zhu, senior system development engineer at AWS, explain:

The latest version of Apache Lucene offers significant improvements in performance, efficiency, and vector search functionality. These types of improvements pave the way for larger vector and search deployments, enabling AI workloads to scale factorially over time.

Lucene 10 introduces improvements in both I/O and search parallelism, and requires JVM version 21 or later—resulting in some breaking changes and prompting a major version update. Elasticsearch, which reverted to an open source model under the AGPL license last year, recently released version 9.0.0-rc1, which also supports the latest version of Lucene.

The latest OpenSearch release also adds support for gRPC and pull-based ingestion, and introduces reader-writer separation. This allows indexing and search workloads to be configured independently, ensuring consistent, high-performance operation for each. McIntyre, Singh, and Zhu add:

Benefiting from underlying HTTP/2 infrastructure, gRPC supports multiplexing and bidirectional data streams, enabling clients to send and receive requests concurrently over the same TCP connection. Performance gains can be especially pronounced for users working with large and complex queries, where the overhead of deserializing requests can compound when using JSON.

OpenSearch now also supports index type detection and integrates the dynamic data management framework Apache Calcite, enabling iterative query building and exploration. This is achieved by incorporating the query builder into OpenSearch SQL and PPL. In a popular thread on Hacker News, Joe Johnston writes:

Elastic still has the edge on features. Especially Kibana has a lot more features than Amazon’s fork (…) A lot of my consulting clients seem to prefer Opensearch lately. That’s mainly because of the less complicated licensing and the AWS support.

Comparing OpenSearch and Elasticsearch, user Macha adds:

One thing that Opensearch misses that would have been very nice to have on a recent project is enrich processors.

OpenSearch is open source under the Apache 2.0 license. More details about the latest release are available in the release notes on GitHub.



Self-Healing Infrastructure Could Be the Future of Data Management | HackerNoon

MMS Founder
MMS RSS

Posted on mongodb google news. Visit mongodb google news

“Data is the new oil, but managing it is a whole different game.” — Clive Humby, Data Science Expert

With the growing reliance on large and complex data environments, effectively managing these systems has become a critical business imperative. In my database management work, particularly in high-stakes sectors like healthcare and finance, I saw firsthand how the pace of technological change, growing data volumes, and the accompanying operational risk pushed manual processes to the breaking point. Traditional approaches that relied on intuition, observation, and manual adjustment were no longer sufficient.

As a Senior MongoDB Administrator with over six years of experience, I began my career with performance tuning, high availability, and disaster recovery implementations across multiple platforms, including on-premises setups and cloud services like AWS and Azure. As systems grew larger and more complex, especially in mission-critical environments, it became increasingly apparent that the way forward was to anticipate issues before they arose.

This awareness led me to move towards automated and AI-based solutions. I began by incorporating Python scripts to perform repetitive tasks such as backup verification, index tuning, and capacity planning. Gradually, this progressed to the implementation of AI-based anomaly detection and predictive modeling tools, which enabled the transition from reactive troubleshooting to proactive database administration.

At UST, with over 60 MongoDB clusters and multiple PostgreSQL deployments, the challenge was managing terabytes of operational data every week. That scale produced problems like replication lag, disk saturation, and backup failures that had to be addressed urgently. Manual health checks, anomaly identification, and system recovery across geographies were time-consuming and risky. The experience was instructive: automation and AI are indispensable for keeping data infrastructure secure, optimized, and reliable.

In this article, I will walk through how automation and AI reshaped the management of mission-critical systems, transforming them from reactive fire-fighting efforts into a predictive, scalable, and more resilient approach.

The Consequences of Undetected Issues

When we first experienced undetected replication lag in one of our healthcare clusters, the consequences were serious. Data consistency was compromised, and dependent applications began showing outdated information to end users. The root of the issue wasn’t a lack of tooling, as we had alerts configured, dashboards in place, and regular log reviews. The problem was scale. Log files were enormous, producing hundreds of megabytes of data per node every day. Threshold-based alerts, designed for simpler systems, fired too frequently to be useful. Important anomalies were buried in noise.

I realized that traditional incident response processes based on manual log reviews and periodic health checks could no longer cope. There were instances where backup failures were not identified for days until we attempted restores. Without predictive capabilities, we were essentially blind, reacting to problems after damage had already been done.

Automating the Fundamentals: From Scripts to Standard Practice

My initial response was to automate the fundamentals. I started with Bash scripts and lightweight Python scripts that captured essential health metrics like CPU usage, memory consumption, disk utilization, replication lag, backup timestamps, and log tail summaries. These scripts were scheduled via cron jobs and configured to compile hourly reports pushed to Slack and email groups. This wasn’t about replacing our monitoring platforms but augmenting them with immediate, actionable intelligence.

What stood out was how quickly these basic automations surfaced recurring problems. Within three months, these scripts flagged over 20 previously undetected replication lag events and several instances of disk pressure that would have otherwise led to outages. These automations flagged issues like replication lag spikes beyond acceptable limits, triggering proactive actions to reshard or reroute workloads. It allowed us to preemptively address anomalies without waiting for incidents to escalate.

Backup validation posed a similar challenge. Backups ran every four to six hours on critical clusters, with logs confirming completion. Yet, a routine restore test exposed silent corruption in a backup chain.

For backup validation, we implemented a Python script that calculates MD5 hashes before and after a backup job:

import hashlib

def calculate_md5(file_path):
    hash_md5 = hashlib.md5()
    with open(file_path, "rb") as f:
        for chunk in iter(lambda: f.read(4096), b""):
            hash_md5.update(chunk)
    return hash_md5.hexdigest()
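
A minimal driver for that check might look like the following; the paths are placeholders used for illustration:

# Compare the hash recorded when the backup was written with the hash of the
# restored (or copied) file; a mismatch indicates silent corruption in the chain.
source_hash = calculate_md5("/backups/cluster1/dump.archive")
restored_hash = calculate_md5("/restore-test/cluster1/dump.archive")

if source_hash != restored_hash:
    print(f"Backup integrity check FAILED: {source_hash} != {restored_hash}")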

This automation flagged backup discrepancies immediately, helping us surface seven silent failures across critical clusters.

For replication lag monitoring, we used a lightweight Python tool interacting with MongoDB:

from pymongo import MongoClient

client = MongoClient('mongodb://primary_host:27017')
status = client.admin.command("replSetGetStatus")

# Use the primary's optime as the reference point when computing lag
primary_optime = next(m['optimeDate'] for m in status['members']
                      if m['stateStr'] == 'PRIMARY')

for member in status['members']:
    lag_seconds = (primary_optime - member['optimeDate']).total_seconds()
    print(f"{member['name']} - lag: {lag_seconds:.0f}s")

This script helped identify critical replication issues long before conventional alerts were triggered.

Tackling Anomaly Detection with AI

Even as these operational improvements stabilized daily management, traditional tools struggled to keep up. Our infrastructure generates between 500MB to 1GB of logs per cluster per day, too much for any team to review manually. Log parsers and threshold-based alerts remained reactive, notifying us after the fact.

To address this, I built an anomaly detection framework using RandomForest classifiers trained on 18 months of operational incident logs. The model correlated operational metrics such as CPU spikes, memory overuse, replication lag patterns, query error rates, and job delays to identify emerging risks. We used metrics pulled from MongoDB logs and MongoDB Ops Manager monitoring data as model inputs.

Tuning was critical. We deliberately optimized the model for precision to minimize false positives. In production, the system predicted 85% of major incidents three to five hours ahead of conventional tools. More importantly, the AI recommended specific actions: when rising lag combined with query queue saturation, it advised workload redistribution, and disk pressure trends prompted preemptive capacity expansion.

Critically, these models learned operational patterns. For example, they identified consistent spikes in CPU usage and query response times tied to specific applications and timeframes. One clear pattern flagged by the AI was: “queries from App X consistently cause CPU spikes during the morning load window.” With this insight, we proactively rescheduled heavy reporting jobs and optimized specific collections and indexes, eliminating daily bottlenecks that previously triggered cascading replication lags.

Our operational dataset fed into this model looked like:

from sklearn.ensemble import RandomForestClassifier
import pandas as pd

data = pd.read_csv("metrics_data.csv")
X = data[['cpu', 'memory', 'replica_lag', 'query_errors', 'job_delays']]
y = data['incident']

model = RandomForestClassifier(n_estimators=100)
model.fit(X, y)
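
Once trained, the classifier scores a live snapshot of the same metrics; the sketch below shows how predictions might be consumed (the feature values and threshold are illustrative):

# Score a live metrics snapshot; values and the 0.7 threshold are illustrative
live = pd.DataFrame([{
    'cpu': 78, 'memory': 83, 'replica_lag': 45, 'query_errors': 12, 'job_delays': 3
}])
incident_probability = model.predict_proba(live)[0][1]
if incident_probability > 0.7:
    print(f"Elevated incident risk: {incident_probability:.0%}")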

Over 12 months, this AI-driven system reduced our P1 incident volume by 30%, freeing teams to focus on preventive infrastructure design.

Turning Logs into Predictive Signals

Another insight came from applying anomaly scoring models to operational logs in near real-time. AI models processed logs sequentially, flagging error sequences and minor lag patterns that humans consistently missed. We assigned real-time health scores to each cluster by combining anomaly density, CPU/memory utilization trends, replica set performance, and query backlogs into a dynamic risk scoring engine.

For instance:

def calculate_risk_score(log_anomalies, cpu, memory, lag, query_failures):
    score = (log_anomalies * 0.4) + (cpu * 0.2) + (memory * 0.2) + (lag * 0.1) + (query_failures * 0.1)
    return score

As cluster scores crossed risk thresholds, automated playbooks proposed mitigation actions, moving workloads to healthy replicas, triggering shard balancing, isolating high-risk nodes, or increasing cache capacity. We used Python-based data pipelines to aggregate log metrics, correlate patterns, and trigger API-based commands via MongoDB Ops Manager or custom automation services.
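
A stripped-down sketch of that dispatch logic, with made-up thresholds and action names standing in for the real playbooks and API calls, looks like this:

# Thresholds and action names are illustrative; the real playbooks invoked
# MongoDB Ops Manager or custom automation services.
def choose_mitigation(score):
    if score >= 80:
        return "isolate_node_and_failover"
    if score >= 60:
        return "reroute_reads_to_healthy_replicas"
    if score >= 40:
        return "trigger_shard_rebalance"
    return "no_action"

risk = calculate_risk_score(log_anomalies=55, cpu=70, memory=65, lag=30, query_failures=20)
print(choose_mitigation(risk))  # -> trigger_shard_rebalance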

It wasn’t just the anomalies that mattered, but the patterns. The AI could detect when small deviations in replica lag, combined with increased query failures, historically led to eventual node failures. Identifying weak signals like this allowed us to intervene early by rerouting traffic, initiating rebalancing, or increasing priority for critical processes.

This proactive approach reduced unplanned outages by nearly a third. It was about preventing the need for reaction by identifying weak signals early and intervening intelligently.

Operational Gains with Generative AI for Query Automation

Another persistent pain point was ad hoc query generation and documentation maintenance. Junior DBAs often struggled to craft performant, accurate queries under time pressure, particularly on unfamiliar, evolving schemas. This resulted in inefficient queries, high resource consumption, or inconsistent documentation.

We piloted a fine-tuned language model trained on production query libraries and metadata. This AI assistant converted natural language prompts like “fetch all users from California with a last purchase in the last 30 days” into optimized SQL or MongoDB queries. It generated explain plans and recommended indexes for query performance optimization.
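
For the example prompt above, the generated query would be roughly equivalent to the following pymongo call; the connection details and field names (state, last_purchase) are illustrative rather than our actual schema:

from datetime import datetime, timedelta
from pymongo import MongoClient

db = MongoClient("mongodb://localhost:27017")["appdb"]  # connection details are placeholders

# "fetch all users from California with a last purchase in the last 30 days"
cutoff = datetime.utcnow() - timedelta(days=30)
users = db.users.find(
    {"state": "California", "last_purchase": {"$gte": cutoff}},
    {"_id": 0, "name": 1, "email": 1},  # project only the fields needed
)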

In parallel, schema parsing and documentation automation systems generated Markdown-formatted documentation for collections, indexes, and relationships. Every schema change triggered updates, ensuring real-time accuracy.

For index optimization, we prototyped an AI-based suggestion engine:

import pandas as pd
from sklearn.ensemble import RandomForestClassifier

data = pd.read_csv("query_patterns.csv")
X = data[['query_time', 'index_count', 'frequency']]
y = data['needs_new_index']

model = RandomForestClassifier()
model.fit(X, y)

The results were immediate: query turnaround times dropped by 60%, error rates decreased, and new DBAs onboarded faster with reliable, up-to-date documentation. Complex queries that previously took hours were now safely generated within minutes.

Lessons from Automation Failures

Not every automation succeeded. One cleanup script inadvertently deleted healthy backup files due to a missing dry-run validation mode, leading to 4TB of data loss. The issue wasn’t the automation itself but the absence of operational safeguards and approval loops.

Post-incident, we overhauled destructive automation workflows. Every potentially destructive job was redesigned to support dry-run simulations, explicit double-confirmations, and full rollback markers. Audit logs were expanded to include operation metadata, rollback options, and outcome records. These changes, though simple, prevented irreversible incidents from recurring.

This reinforced a core principle: automation multiplies outcomes. Without governance and failsafes, it magnifies mistakes as efficiently as it does successes. Reliability is earned by combining speed with operational discipline.

The Move Toward Self-Healing Infrastructure

Today, our efforts focus on building self-healing data infrastructure. We’re actively experimenting with reinforcement learning models capable of dynamically adjusting cluster parameters, cache sizes, write concerns, and sharding strategies based on live workload patterns and historical outcomes.

These models don’t just optimize configurations based on static rules; they continuously learn from operational telemetry. For instance, they detect patterns like “App Y’s checkout transactions peak at 7 PM daily, consistently increasing write latency.” Based on this, the system automatically allocates additional write capacity, optimizes indexes, or adjusts cache sizes ahead of load spikes, preventing slowdowns.

In addition to dynamic workload tuning, we’ve initiated concept work on AI-driven auto-index suggestion engines. These systems analyze historical query patterns, usage frequencies, and query plan latencies to proactively recommend new indexes or modifications to existing ones. The AI identifies underutilized or redundant indexes and suggests removal or replacement, maintaining an optimal balance between index overhead and query performance by correlating index usage statistics with query execution times.

Prototype systems detect node failures, trigger auto-resharding, reallocate workloads, and rebuild replica sets automatically. We’re integrating AI-driven scaling decisions, workload rebalance triggers, rollback-safe maintenance workflows, and predictive cluster tuning features. We’ve begun exploring anomaly-triggered automated failovers and automatic read/write routing optimizations. These capabilities, currently in limited test deployments, will soon transform DBAs from operational responders into infrastructure strategists, allowing teams to focus on proactive infrastructure design and strategic optimization rather than daily firefighting.

Final Reflections

What this journey made clear is that the complexity of distributed data systems today can no longer be handled by reactive, human-only operations. The move toward predictive, AI-assisted, and self-correcting systems is an operational necessity. AI models capable of learning workload behavior, identifying weak operational signals, and executing preemptive corrections are redefining what “database management” even means.

Automating fundamentals first, validating every operational assumption, layering AI systems only on clean, reliable, and auditable data, and prioritizing explainability and rollback before speed were what made these projects sustainable. Automation multiplies both good and bad outcomes, and without strong operational discipline and governance, it risks amplifying mistakes.

What I’ve seen is that automation, when combined with thoughtful human oversight and operational experience, changes the role of the DBA for the better. It frees experts from repetitive fire-fighting and reactive support, allowing them to focus on resilience engineering, infrastructure tuning, strategic scaling decisions, and mentoring junior teams.

The future isn’t AI versus DBAs. It’s operational partnerships between AI-driven systems and skilled DBAs who understand systems holistically, performance tradeoffs, data dependencies, application patterns, and operational consequences. When AI handles anomaly detection, dynamic scaling, backup validation, and query optimization, DBAs are elevated to infrastructure strategists.

Done right, AI-led automation transforms operational risk into operational resilience, delivering faster response times, safer failovers, smarter scaling, and cleaner data governance. This is an operational mindset shift, and those who adopt it early will build stronger, safer, and more intelligent data platforms.



MongoDB Inc (MDB) Shares Down 3.29% on May 30 – GuruFocus

MMS Founder
MMS RSS

Posted on mongodb google news. Visit mongodb google news

Shares of MongoDB Inc (MDB, Financial) fell 3.29% in mid-day trading on May 30. The stock reached an intraday low of $182.69, before recovering slightly to $183.14, down from its previous close of $189.37. This places MDB 50.50% below its 52-week high of $370.00 and 30.09% above its 52-week low of $140.78. Trading volume was 922,832 shares, 39.2% of the average daily volume of 2,352,027.

Wall Street Analysts Forecast

Based on the one-year price targets offered by 34 analysts, the average target price for MongoDB Inc (MDB, Financial) is $263.67 with a high estimate of $520.00 and a low estimate of $160.00. The average target implies an upside of 43.98% from the current price of $183.14. More detailed estimate data can be found on the MongoDB Inc (MDB) Forecast page.

Based on the consensus recommendation from 37 brokerage firms, MongoDB Inc’s (MDB, Financial) average brokerage recommendation is currently 2.0, indicating “Outperform” status. The rating scale ranges from 1 to 5, where 1 signifies Strong Buy, and 5 denotes Sell.

Based on GuruFocus estimates, the estimated GF Value for MongoDB Inc (MDB, Financial) in one year is $438.59, suggesting an upside of 139.49% from the current price of $183.14. GF Value is GuruFocus’ estimate of the fair value that the stock should be traded at. It is calculated based on the historical multiples the stock has traded at previously, as well as past business growth and the future estimates of the business’ performance. More detailed data can be found on the MongoDB Inc (MDB) Summary page.

This article, generated by GuruFocus, is designed to provide general insights and is not tailored financial advice. Our commentary is rooted in historical data and analyst projections, utilizing an impartial methodology, and is not intended to serve as specific investment guidance. It does not formulate a recommendation to purchase or divest any stock and does not consider individual investment objectives or financial circumstances. Our objective is to deliver long-term, fundamental data-driven analysis. Be aware that our analysis might not incorporate the most recent, price-sensitive company announcements or qualitative information. GuruFocus holds no position in the stocks mentioned herein.



24,778 Shares in MongoDB, Inc. (NASDAQ:MDB) Acquired by Lansforsakringar …

MMS Founder
MMS RSS

Posted on mongodb google news. Visit mongodb google news

Lansforsakringar Fondforvaltning AB publ bought a new position in shares of MongoDB, Inc. (NASDAQ:MDB) in the fourth quarter, according to its most recent 13F filing with the Securities and Exchange Commission (SEC). The firm bought 24,778 shares of the company’s stock, valued at approximately $5,769,000.

A number of other large investors have also recently added to or reduced their stakes in MDB. Norges Bank purchased a new stake in shares of MongoDB in the 4th quarter valued at approximately $189,584,000. Marshall Wace LLP purchased a new stake in shares of MongoDB in the 4th quarter valued at approximately $110,356,000. Raymond James Financial Inc. purchased a new stake in shares of MongoDB in the 4th quarter valued at approximately $90,478,000. D1 Capital Partners L.P. purchased a new stake in shares of MongoDB in the 4th quarter valued at approximately $76,129,000. Finally, Amundi lifted its position in shares of MongoDB by 86.2% in the 4th quarter. Amundi now owns 693,740 shares of the company’s stock valued at $172,519,000 after acquiring an additional 321,186 shares in the last quarter. 89.29% of the stock is currently owned by institutional investors.

Wall Street Analyst Weigh In

Several equities research analysts recently commented on MDB shares. Macquarie reduced their price target on shares of MongoDB from $300.00 to $215.00 and set a “neutral” rating for the company in a research report on Friday, March 7th. Mizuho cut their target price on shares of MongoDB from $250.00 to $190.00 and set a “neutral” rating for the company in a research report on Tuesday, April 15th. Robert W. Baird cut their target price on shares of MongoDB from $390.00 to $300.00 and set an “outperform” rating for the company in a research report on Thursday, March 6th. Loop Capital downgraded shares of MongoDB from a “buy” rating to a “hold” rating and cut their target price for the stock from $350.00 to $190.00 in a research report on Tuesday, May 20th. Finally, Citigroup cut their target price on shares of MongoDB from $430.00 to $330.00 and set a “buy” rating for the company in a research report on Tuesday, April 1st. Nine equities research analysts have rated the stock with a hold rating, twenty-three have issued a buy rating and one has issued a strong buy rating to the company’s stock. According to data from MarketBeat.com, the stock has a consensus rating of “Moderate Buy” and an average price target of $286.88.

Insider Activity at MongoDB

In other MongoDB news, CFO Srdjan Tanjga sold 525 shares of the firm’s stock in a transaction on Wednesday, April 2nd. The shares were sold at an average price of $173.26, for a total value of $90,961.50. Following the sale, the chief financial officer now directly owns 6,406 shares in the company, valued at $1,109,903.56. This trade represents a 7.57% decrease in their position. The transaction was disclosed in a document filed with the SEC, which is available at the SEC website. Also, Director Dwight A. Merriman sold 3,000 shares of the firm’s stock in a transaction on Monday, March 3rd. The stock was sold at an average price of $270.63, for a total transaction of $811,890.00. Following the completion of the sale, the director now owns 1,109,006 shares in the company, valued at $300,130,293.78. This represents a 0.27% decrease in their ownership of the stock. The disclosure for this sale can be found here. In the last three months, insiders have sold 25,203 shares of company stock worth $4,660,459. 3.60% of the stock is owned by insiders.

MongoDB Trading Down 0.3%

NASDAQ:MDB opened at $188.45 on Thursday. MongoDB, Inc. has a 1 year low of $140.78 and a 1 year high of $370.00. The firm’s fifty day simple moving average is $174.52 and its 200 day simple moving average is $234.21. The firm has a market capitalization of $15.30 billion, a PE ratio of -68.78 and a beta of 1.49.

MongoDB (NASDAQ:MDB) last posted its earnings results on Wednesday, March 5th. The company reported $0.19 earnings per share (EPS) for the quarter, missing analysts’ consensus estimates of $0.64 by ($0.45). MongoDB had a negative net margin of 10.46% and a negative return on equity of 12.22%. The firm had revenue of $548.40 million during the quarter, compared to analyst estimates of $519.65 million. During the same quarter in the previous year, the business posted $0.86 earnings per share. As a group, analysts anticipate that MongoDB, Inc. will post -1.78 EPS for the current fiscal year.

About MongoDB

MongoDB, Inc., together with its subsidiaries, provides a general-purpose database platform worldwide. The company provides MongoDB Atlas, a hosted multi-cloud database-as-a-service solution; MongoDB Enterprise Advanced, a commercial database server for enterprise customers to run in the cloud, on-premises, or in a hybrid environment; and Community Server, a free-to-download version of its database, which includes the functionality that developers need to get started with MongoDB.



Google Releases MedGemma: Open AI Models for Medical Text and Image Analysis

MMS Founder
MMS Robert Krzaczynski

Article originally posted on InfoQ. Visit InfoQ

Google has released MedGemma, a pair of open-source generative AI models designed to support medical text and image understanding in healthcare applications. Based on the Gemma 3 architecture, the models are available in two configurations: MedGemma 4B, a multimodal model capable of processing both images and text, and MedGemma 27B, a larger model focused solely on medical text.

According to Google, the models are designed to assist in tasks such as radiology report generation, clinical summarization, patient triage, and general medical question answering. MedGemma 4B, in particular, has been pre-trained using a wide range of de-identified medical images, including chest X-rays, dermatology photos, histopathology slides, and ophthalmologic images. Both models are available under open licenses for research and development use, and come in pre-trained and instruction-tuned variants.

Despite these capabilities, Google emphasizes that MedGemma is not intended for direct clinical use without further validation and adaptation. The models are intended to serve as a foundation for developers, who can adapt and fine-tune them for specific medical use cases.

Some early testers have shared observations on the models’ strengths and limitations. Vikas Gaur, a clinician and AI practitioner, tested the MedGemma 4B-it model using a chest X-ray from a patient with confirmed tuberculosis. He reported that the model generated a normal interpretation, missing clinically evident signs of the disease:

Despite clear TB findings in the actual case, MedGemma reported: ‘Normal chest X-ray. Heart size is within normal limits. Lungs well-expanded and clear.’

Gaur suggested that additional training on high-quality annotated data might help align model outputs with clinical expectations.

Furthermore, Mohammad Zakaria Rajabi, a biomedical engineer, noted interest in expanding the capabilities of the larger 27B model to include image processing:

We are eagerly looking forward to seeing MedGemma 27B support image analysis as well.

Technical documentation indicates that the models were evaluated on over 22 datasets spanning multiple medical tasks and imaging modalities. Public datasets used in training include MIMIC-CXR, Slake-VQA, PAD-UFES-20, and others. Several proprietary and internal datasets were also used under license or participant consent.

The models can be adapted through techniques like prompt engineering, fine-tuning, and integration with agentic systems using other tools from the Gemini ecosystem. However, performance can vary depending on prompt structure, and the models have not been evaluated for multi-turn conversations or multi-image inputs.
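
As a rough starting point rather than official documentation, the instruction-tuned text variant can be loaded like any other Hugging Face checkpoint; the model ID below is an assumption, and access may require accepting the model's license on Hugging Face:

from transformers import pipeline

# Assumed Hugging Face ID for the instruction-tuned 27B text model
generator = pipeline("text-generation", model="google/medgemma-27b-text-it")

prompt = "Summarize the key differential diagnoses for acute chest pain."
result = generator(prompt, max_new_tokens=200)
print(result[0]["generated_text"])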

MedGemma provides an accessible foundation for research and development in medical AI, but its practical effectiveness will depend on how well it is validated, fine-tuned, and integrated into specific clinical or operational contexts.
