Navigating Complexity, from AI Strategy to Resilient Architecture: InfoQ Dev Summit Munich 2025

MMS Founder
MMS Artenisa Chatziou

Article originally posted on InfoQ. Visit InfoQ

Here at InfoQ, we spend our days tracking the patterns and practices that senior software professionals are using to solve their toughest challenges. Currently, three key themes dominate our conversations with tech leaders across Europe: the immense pressure to integrate AI responsibly, the critical need to build secure and resilient systems, and the challenge of navigating an increasingly complex regulatory landscape.

At this year’s InfoQ Dev Summit Munich, taking place on 15–16 October, we’ll explore these pressures. We’ve curated a program that moves beyond the hype to focus on the peer-led, actionable insights you need to lead with confidence.

As always, we’ve built this conference on the core InfoQ principle: practical, unbiased content from practitioners in the trenches. No hidden product pitches. No marketing fluff. Just real-world lessons from the front lines of software development.

Here’s a preview of what we’re excited about so far:

Architecting for a Sovereign and Secure Europe

We know that data sovereignty and supply-chain security are top of mind for every architect in the EU. That’s why we’ve invited Markus Ostertag, Chief AWS Technologist at adesso, to give us a technical look at the forthcoming AWS European Sovereign Cloud.

We’ve also tasked Soroosh Khodami, Solution Architect at Code Nomads, with providing a hands-on checklist for hardening your CI/CD pipelines against the next big breach. These sessions are designed to give you the pragmatic strategies you need to build secure-by-design systems.

Moving from AI Hype to Real-World Impact

AI is reshaping our industry, but the real challenge lies in translating theory into tangible value. We’ve brought together speakers who are doing just that.

We’re excited for Patrick Debois to explore how AI is shifting the senior developer’s role from pure implementation to strategic intent. To ground this in reality, Mariia Bulycheva, Senior Applied Scientist at Zalando, will demonstrate how they are utilizing graph neural networks for large-scale personalization.

We’re kicking things off with a keynote from Katharine Jarmul on building privacy-first ML pipelines and closing with Tejas Kumar from DataStax, who will explore what’s next for Generative AI. This is about giving you the tools to lead your team’s AI adoption responsibly and effectively.

Building Resilient Systems Under Pressure

Finally, we aim to demonstrate what it truly takes to build resilient, high-performance systems. We’re particularly looking forward to hearing from Chris Tacey-Green, Head of Engineering at Investec. He won’t just show us the successful event-driven patterns behind their real-time payment system; he’s promised to share the scars and critical trade-offs they navigated on Azure.

Similarly, Daniele Frasca of Seven.One Entertainment Group will walk us through the architectural evolution required to meet the intense demands of a live TV streaming backend.

This is just a small sample of the talks. What excites us most is bringing these innovators together in one place. The summit is your opportunity to connect with peers, validate your own architectural thinking, and walk away with ideas you can implement immediately.

If you’re grappling with these challenges, we hope you’ll join us in Munich.

The InfoQ Dev Summit Munich takes place on 15–16 October 2025. Limited early bird tickets are available now.

About the Author



MongoDB, Inc. (NASDAQ:MDB) Shares Acquired by Cambridge Investment Research Advisors Inc.

MMS Founder
MMS RSS

Posted on mongodb google news. Visit mongodb google news

Cambridge Investment Research Advisors Inc. grew its position in MongoDB, Inc. (NASDAQ:MDB) by 4.0% in the first quarter, according to its most recent Form 13F filing with the Securities and Exchange Commission. The institutional investor owned 7,748 shares of the company’s stock after buying an additional 298 shares during the quarter. Cambridge Investment Research Advisors Inc.’s holdings in MongoDB were worth $1,359,000 as of its most recent filing with the Securities and Exchange Commission.

Other institutional investors have also modified their holdings of the company. Strategic Investment Solutions Inc. IL acquired a new stake in shares of MongoDB in the fourth quarter valued at $29,000. Coppell Advisory Solutions LLC boosted its holdings in MongoDB by 364.0% in the fourth quarter. Coppell Advisory Solutions LLC now owns 232 shares of the company’s stock valued at $54,000 after purchasing an additional 182 shares during the period. Smartleaf Asset Management LLC boosted its holdings in MongoDB by 56.8% in the fourth quarter. Smartleaf Asset Management LLC now owns 370 shares of the company’s stock valued at $87,000 after purchasing an additional 134 shares during the period. J.Safra Asset Management Corp boosted its holdings in MongoDB by 72.0% in the fourth quarter. J.Safra Asset Management Corp now owns 387 shares of the company’s stock valued at $91,000 after purchasing an additional 162 shares during the period. Finally, Aster Capital Management DIFC Ltd purchased a new position in MongoDB in the fourth quarter valued at $97,000. 89.29% of the stock is owned by institutional investors and hedge funds.

Analyst Upgrades and Downgrades

A number of research analysts recently issued reports on the company. Daiwa Capital Markets assumed coverage on MongoDB in a report on Tuesday, April 1st. They issued an “outperform” rating and a $202.00 target price for the company. The Goldman Sachs Group decreased their target price on MongoDB from $390.00 to $335.00 and set a “buy” rating for the company in a report on Thursday, March 6th. William Blair reaffirmed an “outperform” rating on shares of MongoDB in a report on Thursday, June 26th. Truist Financial decreased their price target on MongoDB from $300.00 to $275.00 and set a “buy” rating for the company in a report on Monday, March 31st. Finally, Daiwa America raised MongoDB to a “strong-buy” rating in a report on Tuesday, April 1st. Eight equities research analysts have rated the stock with a hold rating, twenty-five have issued a buy rating and one has issued a strong buy rating to the stock. According to MarketBeat.com, the stock currently has an average rating of “Moderate Buy” and a consensus target price of $282.47.


Check Out Our Latest Stock Report on MongoDB

MongoDB Stock Up 3.2%

Shares of MDB stock opened at $211.05 on Friday. The stock has a market capitalization of $17.24 billion, a PE ratio of -185.13 and a beta of 1.41. MongoDB, Inc. has a 52-week low of $140.78 and a 52-week high of $370.00. The stock’s 50-day moving average price is $194.66 and its 200 day moving average price is $215.72.

MongoDB (NASDAQ:MDB) last issued its quarterly earnings results on Wednesday, June 4th. The company reported $1.00 earnings per share (EPS) for the quarter, beating the consensus estimate of $0.65 by $0.35. MongoDB had a negative return on equity of 3.16% and a negative net margin of 4.09%. The firm had revenue of $549.01 million for the quarter, compared to analysts’ expectations of $527.49 million. During the same quarter in the previous year, the firm posted $0.51 EPS. The firm’s revenue for the quarter was up 21.8% on a year-over-year basis. Analysts expect that MongoDB, Inc. will post -1.78 earnings per share for the current fiscal year.

Insiders Place Their Bets

In other news, Director Dwight A. Merriman sold 2,000 shares of the business’s stock in a transaction that occurred on Thursday, June 5th. The stock was sold at an average price of $234.00, for a total transaction of $468,000.00. Following the sale, the director owned 1,107,006 shares of the company’s stock, valued at $259,039,404. The trade was a 0.18% decrease in their position. The sale was disclosed in a legal filing with the Securities & Exchange Commission, which is available through this hyperlink. Also, Director Hope F. Cochran sold 1,174 shares of the business’s stock in a transaction that occurred on Tuesday, June 17th. The stock was sold at an average price of $201.08, for a total value of $236,067.92. Following the sale, the director directly owned 21,096 shares in the company, valued at $4,241,983.68. This trade represents a 5.27% decrease in their position. The disclosure for this sale can be found here. In the last 90 days, insiders have sold 28,999 shares of company stock worth $6,728,127. Company insiders own 3.10% of the company’s stock.

About MongoDB


MongoDB, Inc., together with its subsidiaries, provides a general-purpose database platform worldwide. The company provides MongoDB Atlas, a hosted multi-cloud database-as-a-service solution; MongoDB Enterprise Advanced, a commercial database server for enterprise customers to run in the cloud, on-premises, or in a hybrid environment; and Community Server, a free-to-download version of its database, which includes the functionality that developers need to get started with MongoDB.

Featured Stories

Institutional Ownership by Quarter for MongoDB (NASDAQ:MDB)




Article originally posted on mongodb google news. Visit mongodb google news



Janney Montgomery Scott LLC Buys 2,612 Shares of MongoDB, Inc. (NASDAQ:MDB)

MMS Founder
MMS RSS

Posted on mongodb google news. Visit mongodb google news

Janney Montgomery Scott LLC bought a new position in MongoDB, Inc. (NASDAQ:MDB) during the 1st quarter, according to its most recent disclosure with the Securities and Exchange Commission. The institutional investor bought 2,612 shares of the company’s stock, valued at approximately $458,000.

A number of other hedge funds and other institutional investors also recently made changes to their positions in MDB. Vanguard Group Inc. boosted its stake in shares of MongoDB by 0.3% during the 4th quarter. Vanguard Group Inc. now owns 7,328,745 shares of the company’s stock worth $1,706,205,000 after acquiring an additional 23,942 shares in the last quarter. Franklin Resources Inc. lifted its holdings in MongoDB by 9.7% in the 4th quarter. Franklin Resources Inc. now owns 2,054,888 shares of the company’s stock worth $478,398,000 after purchasing an additional 181,962 shares during the last quarter. Geode Capital Management LLC boosted its position in MongoDB by 1.8% during the fourth quarter. Geode Capital Management LLC now owns 1,252,142 shares of the company’s stock worth $290,987,000 after purchasing an additional 22,106 shares during the period. First Trust Advisors LP grew its holdings in MongoDB by 12.6% during the fourth quarter. First Trust Advisors LP now owns 854,906 shares of the company’s stock valued at $199,031,000 after purchasing an additional 95,893 shares during the last quarter. Finally, Norges Bank bought a new position in shares of MongoDB in the fourth quarter valued at approximately $189,584,000. 89.29% of the stock is owned by hedge funds and other institutional investors.

Wall Street Analysts Forecast Growth

Several research firms recently issued reports on MDB. Macquarie reaffirmed a “neutral” rating and set a $230.00 price objective (up from $215.00) on shares of MongoDB in a report on Friday, June 6th. DA Davidson restated a “buy” rating and set a $275.00 price target on shares of MongoDB in a report on Thursday, June 5th. UBS Group raised their price target on MongoDB from $213.00 to $240.00 and gave the company a “neutral” rating in a research report on Thursday, June 5th. Royal Bank Of Canada reissued an “outperform” rating and set a $320.00 price objective on shares of MongoDB in a research report on Thursday, June 5th. Finally, Oppenheimer decreased their target price on MongoDB from $400.00 to $330.00 and set an “outperform” rating for the company in a research note on Thursday, March 6th. Eight investment analysts have rated the stock with a hold rating, twenty-five have issued a buy rating and one has given a strong buy rating to the stock. According to MarketBeat.com, MongoDB currently has an average rating of “Moderate Buy” and a consensus target price of $282.47.

Get Our Latest Stock Analysis on MongoDB

Insider Activity at MongoDB

In other MongoDB news, Director Hope F. Cochran sold 1,174 shares of MongoDB stock in a transaction on Tuesday, June 17th. The shares were sold at an average price of $201.08, for a total transaction of $236,067.92. Following the completion of the sale, the director owned 21,096 shares of the company’s stock, valued at approximately $4,241,983.68. The trade was a 5.27% decrease in their position. The transaction was disclosed in a legal filing with the Securities & Exchange Commission, which is accessible through the SEC website. Also, CEO Dev Ittycheria sold 25,005 shares of the firm’s stock in a transaction on Thursday, June 5th. The stock was sold at an average price of $234.00, for a total value of $5,851,170.00. Following the completion of the transaction, the chief executive officer owned 256,974 shares in the company, valued at $60,131,916. This represents an 8.87% decrease in their ownership of the stock. The disclosure for this sale can be found here. Insiders have sold 28,999 shares of company stock worth $6,728,127 over the last ninety days. Company insiders own 3.10% of the company’s stock.

MongoDB Stock Performance

NASDAQ MDB opened at $211.05 on Friday. The stock has a market capitalization of $17.24 billion, a P/E ratio of -185.13 and a beta of 1.41. MongoDB, Inc. has a 52 week low of $140.78 and a 52 week high of $370.00. The stock has a fifty day moving average of $194.66 and a 200-day moving average of $215.72.

MongoDB (NASDAQ:MDB) last posted its quarterly earnings results on Wednesday, June 4th. The company reported $1.00 earnings per share (EPS) for the quarter, beating the consensus estimate of $0.65 by $0.35. MongoDB had a negative return on equity of 3.16% and a negative net margin of 4.09%. The business had revenue of $549.01 million for the quarter, compared to analysts’ expectations of $527.49 million. During the same quarter last year, the business earned $0.51 earnings per share. The firm’s quarterly revenue was up 21.8% compared to the same quarter last year. Equities research analysts forecast that MongoDB, Inc. will post -1.78 earnings per share for the current fiscal year.

MongoDB Profile


MongoDB, Inc., together with its subsidiaries, provides a general-purpose database platform worldwide. The company provides MongoDB Atlas, a hosted multi-cloud database-as-a-service solution; MongoDB Enterprise Advanced, a commercial database server for enterprise customers to run in the cloud, on-premises, or in a hybrid environment; and Community Server, a free-to-download version of its database, which includes the functionality that developers need to get started with MongoDB.

See Also

Want to see what other hedge funds are holding MDB? Visit HoldingsChannel.com to get the latest 13F filings and insider trades for MongoDB, Inc. (NASDAQ:MDB).

Institutional Ownership by Quarter for MongoDB (NASDAQ:MDB)

This instant news alert was generated by narrative science technology and financial data from MarketBeat in order to provide readers with the fastest and most accurate reporting. This story was reviewed by MarketBeat’s editorial team prior to publication. Please send any questions or comments about this story to contact@marketbeat.com.


Article originally posted on mongodb google news. Visit mongodb google news



SQL to NoSQL: Modernizing data access layer with Amazon DynamoDB – AWS

MMS Founder
MMS RSS

Posted on nosqlgooglealerts. Visit nosqlgooglealerts

In Part 1 of our series, we explored how to effectively migrate from SQL to Amazon DynamoDB. After establishing data modeling strategies discussed in Part 2, we now explore key considerations to analyze and design filters, pagination, edge cases, and aggregations, building upon the data models designed to create an efficient data access layer. This component bridges your application with DynamoDB features and capabilities.

The transition from SQL-based access patterns to a DynamoDB API-driven approach presents opportunities to optimize how your application interacts with its data layer. This final part of our series focuses on implementing an effective abstraction layer and handling various data access patterns in DynamoDB.

Redesign the entity model

The entity model, which represents the data structures in your application, will need to be redesigned to match the DynamoDB data model. This might involve de-normalizing the models and restructuring relationships between entities. In addition, consider the effort involved in the following configurations:

  • DynamoDB attribute annotation – Annotate entity properties with DynamoDB-specific attributes, including partition key, sort key, local secondary index (LSI) information, and global secondary index (GSI) information. For example, using a .NET object persistence model requires mapping your classes and properties with DynamoDB tables and attributes.
  • Key prefix configuration – In a single table design, you might have to configure partition and sort key prefixes for your entity models. Analyze how these prefixes will be used for querying within your data access layer. The following code is a sample implementation of key prefix configuration in entity models:
public class Post
{
    private const string PREFIX = "POST#";
    
    public string Id { get; private set; }
    public string Content { get; private set; }
    public string AuthorId { get; private set; }

    public Post(string id, string content, string authorId)
    {
        Id = id;
        Content = content;
        AuthorId = authorId;
    }

    // Property that automatically adds prefix
    public string PartitionKey => $"{PREFIX}{Id}";
}

// Usage example
var post = new Post("123", "Hello World", "USER#456");
var queryKey = post.PartitionKey; // Gets "POST#123"

  • Mapping rule redesign – Due to changes in your entity models, existing mapping rules between your application’s view models and the entity models might need to be redesigned.

Designing the DynamoDB API abstraction layer

The DynamoDB API abstraction layer encapsulates the underlying DynamoDB operations while providing your application with a clean interface. Let’s explore what you might need to implement in this layer.

Error handling and retries

High-traffic scenarios often lead to transient failures that need handling. For instance, during viral content surges or when a celebrity post gains sudden attention, you might encounter throughput exceeded exceptions. Beyond the retries built into the AWS SDKs, you might need to implement application-level strategies such as exponential backoff with jitter, rate limiting, and circuit breakers.
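
The following is a minimal sketch of such a retry wrapper using the AWS SDK for .NET low-level client. The operation delegate, attempt limit, and backoff constants are illustrative assumptions, not a prescribed implementation; the SDK's built-in retries may already cover simpler cases.

using System;
using System.Threading.Tasks;
using Amazon.DynamoDBv2.Model;

public static class DynamoDbRetry
{
    private static readonly Random Jitter = new Random();

    // Retries a DynamoDB operation on throttling, with exponential backoff and jitter.
    public static async Task<T> ExecuteAsync<T>(Func<Task<T>> operation, int maxAttempts = 5)
    {
        for (var attempt = 1; ; attempt++)
        {
            try
            {
                return await operation();
            }
            catch (ProvisionedThroughputExceededException) when (attempt < maxAttempts)
            {
                // Exponential backoff: 100 ms, 200 ms, 400 ms, ... plus random jitter
                // so that many throttled clients do not retry in lockstep.
                var delayMs = (int)(100 * Math.Pow(2, attempt - 1)) + Jitter.Next(0, 100);
                await Task.Delay(delayMs);
            }
        }
    }
}

// Usage: wrap any SDK call, for example
// var response = await DynamoDbRetry.ExecuteAsync(() => client.GetItemAsync(request));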

Batch operation management

Applications often need to process multiple items efficiently to provide a good user experience. Consider scenarios like loading a personalized news feed that combines posts from multiple followed users. You might need to implement the following (a chunking sketch follows this list):

  • Automatic chunking of requests within DynamoDB limits
  • Parallel processing for performance optimization
  • Recovery mechanisms for partial batch failures
  • Progress tracking for long-running operations
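
A minimal sketch of the chunking and partial-failure handling for BatchGetItem follows, assuming the low-level AWS SDK for .NET client. The table name, key shape, and method surface are illustrative; the 100-key batch size is the DynamoDB service limit per BatchGetItem request.

using System.Collections.Generic;
using System.Linq;
using System.Threading.Tasks;
using Amazon.DynamoDBv2;
using Amazon.DynamoDBv2.Model;

public class BatchLoader
{
    private const int MaxBatchGetSize = 100; // DynamoDB BatchGetItem limit per request
    private readonly IAmazonDynamoDB _client;

    public BatchLoader(IAmazonDynamoDB client) => _client = client;

    public async Task<List<Dictionary<string, AttributeValue>>> GetPostsAsync(
        string tableName, IEnumerable<string> postIds)
    {
        var results = new List<Dictionary<string, AttributeValue>>();
        var keys = postIds.Distinct()
            .Select(id => new Dictionary<string, AttributeValue>
            {
                ["PartitionKey"] = new AttributeValue { S = $"POST#{id}" }
            })
            .ToList();

        // Chunk the key list to stay within the per-request limit.
        for (var i = 0; i < keys.Count; i += MaxBatchGetSize)
        {
            var request = new BatchGetItemRequest
            {
                RequestItems = new Dictionary<string, KeysAndAttributes>
                {
                    [tableName] = new KeysAndAttributes
                    {
                        Keys = keys.Skip(i).Take(MaxBatchGetSize).ToList()
                    }
                }
            };

            // UnprocessedKeys signals a partial failure; resubmit until drained.
            // Production code should also back off between resubmissions.
            do
            {
                var response = await _client.BatchGetItemAsync(request);
                if (response.Responses.TryGetValue(tableName, out var items))
                    results.AddRange(items);
                request.RequestItems = response.UnprocessedKeys;
            } while (request.RequestItems != null && request.RequestItems.Count > 0);
        }
        return results;
    }
}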

Loading related entity data

When migrating from a relational database to DynamoDB, a common perception is that, because the data is denormalized, related data access becomes straightforward. However, this isn’t always true. Although some relationships might be modeled using a single-item modeling strategy, cost and performance considerations might lead to other relationships being modeled using strategies like vertical partitioning or composite sort keys.

When adapting to DynamoDB, you might have to develop helper methods in your abstraction layer to load the relational data of an entity (navigation properties) efficiently. These methods need to consider your application architecture, access patterns, and data modeling strategies. For example, in our social media application, loading comments for a post might require different approaches based on the chosen modeling strategy—from simple attribute retrieval in single-item models to query operations in vertical partitioning.

For entities related using a single-item strategy, specific loading logic might not be necessary because all data is retrieved in a single API operation. However, for other modeling strategies like vertical partitioning, your abstraction layer methods need to handle efficient querying based on filter conditions and pagination. For instance, when comments are stored as separate items sharing the post’s partition key, the method must efficiently query and paginate through the related items.

Building upon the batch operation capabilities, you can extend these methods to handle loading related data for multiple items. For example, when loading comments for multiple posts, use BatchGetItem to do the following:

  • Use established batching mechanisms to group requests
  • Apply retries and error handling strategies
  • Provide consistent interfaces for both single and bulk operations

When using GSIs, you might need to retrieve additional attribute data not included in the GSI projection. Design strategies to efficiently load the required data while minimizing API calls and optimizing performance and cost. Your abstraction layer might have to provide the following:

  • Consistent interfaces for loading related data
  • Optimization of API calls and cost
  • Simplified maintenance through centralized implementation

The following code is a sample implementation of loading navigation properties:

// Entity with navigation property
public class Post
{
    public string Id { get; set; }
    public string Content { get; set; }
    public IEnumerable<Comment> Comments { get; set; }
}

// Interface for loading related data
public interface INavigationPropertyManager 
{
    Task<IEnumerable<Comment>> LoadRelatedItemsAsync(string parentId);
    Task<IDictionary<string, IEnumerable<Comment>>> LoadRelatedItemsInBatchAsync(IEnumerable<string> parentIds);
}

// Service using the loader
public class PostService
{
    private readonly INavigationPropertyManager _navigationPropertyManager;

    public PostService(INavigationPropertyManager navigationPropertyManager)
    {
        _navigationPropertyManager = navigationPropertyManager;
    }
    
    public async Task<IEnumerable<Comment>> GetPostCommentsAsync(string postId)
    {
        return await _navigationPropertyManager.LoadRelatedItemsAsync(postId);
    }
}

When designing these methods, analyze your current application’s loading patterns and evaluate whether maintaining similar patterns in DynamoDB can benefit your application’s performance and user experience.

Response mapping

As applications evolve, their data structures and requirements change over time. For instance, when adding new features like post reactions beyond simple likes, or introducing rich media content in user profiles, backward compatibility becomes crucial. You might need to implement mapping logic to perform the following functions (a sketch follows this list):

  • Convert DynamoDB items to domain objects
  • Handle backward compatibility as data models evolve
  • Manage default values for missing attributes
  • Support different versions of the same entity
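
A minimal sketch of such a mapper over the low-level item representation follows. The PostModel shape, the SchemaVersion attribute, and the ReactionCounts attribute are illustrative assumptions introduced for this example.

using System.Collections.Generic;
using Amazon.DynamoDBv2.Model;

// Illustrative domain shape; ReactionCounts and SchemaVersion are assumed
// later additions to the entity.
public sealed record PostModel(
    string Id, string Content, string AuthorId,
    Dictionary<string, int> ReactionCounts, string SchemaVersion);

public static class PostMapper
{
    // Converts a raw DynamoDB item into a domain object, tolerating
    // attributes that older item versions do not carry.
    public static PostModel ToDomain(Dictionary<string, AttributeValue> item)
    {
        // Default the version for items written before versioning existed.
        var version = item.TryGetValue("SchemaVersion", out var v) ? v.N : "1";

        // ReactionCounts (hypothetical v2 attribute): default when missing.
        var reactions = new Dictionary<string, int>();
        if (item.TryGetValue("ReactionCounts", out var r) && r.M != null)
        {
            foreach (var pair in r.M)
                reactions[pair.Key] = int.Parse(pair.Value.N);
        }

        return new PostModel(
            item["Id"].S,
            item["Content"].S,
            item["AuthorId"].S,
            reactions,
            version);
    }
}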

Filter expression building

Complex data retrieval needs often arise in modern applications. For instance, when users want to find posts from a specific time frame that have gained significant engagement, or when filtering comments based on user interaction patterns. Your abstraction layer might need to do the following (a sketch follows this list):

  • Convert complex search criteria into DynamoDB filter expressions
  • Handle multiple filter conditions dynamically
  • Manage expression attribute names and values
  • Support nested attribute filtering
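
A minimal sketch of a reusable filter builder over the low-level QueryRequest model follows; the attribute names in the usage example are illustrative. Name placeholders keep reserved words safe, and value placeholders keep dynamic input out of the expression string.

using System.Collections.Generic;
using Amazon.DynamoDBv2.Model;

public class FilterBuilder
{
    private readonly List<string> _conditions = new List<string>();
    private readonly Dictionary<string, string> _names = new Dictionary<string, string>();
    private readonly Dictionary<string, AttributeValue> _values = new Dictionary<string, AttributeValue>();
    private int _counter;

    // Adds one "attribute <op> value" condition; placeholders avoid
    // collisions with DynamoDB reserved words (e.g., "Status").
    public FilterBuilder Where(string attribute, string op, AttributeValue value)
    {
        var n = $"#a{_counter}";
        var v = $":v{_counter}";
        _counter++;
        _names[n] = attribute;
        _values[v] = value;
        _conditions.Add($"{n} {op} {v}");
        return this;
    }

    public void ApplyTo(QueryRequest request)
    {
        if (_conditions.Count == 0) return;
        request.ExpressionAttributeNames ??= new Dictionary<string, string>();
        request.ExpressionAttributeValues ??= new Dictionary<string, AttributeValue>();
        request.FilterExpression = string.Join(" AND ", _conditions);
        foreach (var p in _names) request.ExpressionAttributeNames[p.Key] = p.Value;
        foreach (var p in _values) request.ExpressionAttributeValues[p.Key] = p.Value;
    }
}

// Usage: posts with at least 100 likes since a given date (illustrative attributes).
// new FilterBuilder()
//     .Where("LikeCount", ">=", new AttributeValue { N = "100" })
//     .Where("CreatedAt", ">=", new AttributeValue { S = "2025-06-26" })
//     .ApplyTo(queryRequest);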

Pagination implementation

Efficient data navigation is important for user experience. Consider scenarios like users scrolling through their infinite news feed, or moderators reviewing comments on viral posts. You might need to implement the following:

  • Token-based pagination using LastEvaluatedKey
  • Configurable page size handling
  • Efficient large result set processing
  • Consistent pagination behavior across different queries

The following code is a sample implementation of pagination:

// Enhanced interface adding pagination support
public interface INavigationPropertyManager 
{
    Task<IEnumerable<Comment>> LoadRelatedItemsAsync(string parentId);
    Task<IDictionary<string, IEnumerable<Comment>>> LoadRelatedItemsInBatchAsync(IEnumerable<string> parentIds);
    // method for paginated loading
    Task<PagedResult<Comment>> LoadRelatedItemsPagedAsync(string parentId, PaginationOptions options);
}

public class PaginationOptions
{
    public int PageSize { get; set; } = 20;
    public string ExclusiveStartKey { get; set; }
}

public class PagedResult<T>
{
    public IEnumerable<T> Items { get; set; }
    public string LastEvaluatedKey { get; set; }
}

// With pagination support
public class PostService
{
    private readonly INavigationPropertyManager _navigationPropertyManager;
    public PostService(INavigationPropertyManager navigationPropertyManager)
    {
        _navigationPropertyManager = navigationPropertyManager;
    }
    
    public async Task<PagedResult<Comment>> GetPostCommentsPagedAsync(
        string postId, 
        int pageSize = 20, 
        string nextToken = null)
    {
        var options = new PaginationOptions 
        { 
            PageSize = pageSize,
            ExclusiveStartKey = nextToken
        };
        
        return await _navigationPropertyManager.LoadRelatedItemsPagedAsync(postId, options);
    }
}

Data encryption

Protecting sensitive user data is paramount in modern applications. Beyond the encryption at rest that DynamoDB provides by default, you might need to implement client-side, attribute-level encryption for sensitive fields (for example, with the AWS Database Encryption SDK for DynamoDB), together with key management through AWS KMS.

Observability

Monitoring application health and performance is essential. When tracking viral post performance or user engagement patterns during peak usage times, detailed insights become important. Consider monitoring the following Amazon CloudWatch metrics (a custom-metric sketch follows this list):

  • Request latency tracking – Monitor DynamoDB metrics like SuccessfulRequestLatency, and create custom metrics to track latency caused by exceptions such as TransactionConflict and ConditionalCheckFailedRequests
  • Capacity consumption – Track ConsumedReadCapacityUnits and ConsumedWriteCapacityUnits
  • Error rates and patterns – Monitor ConditionalCheckFailedRequests, SystemErrors, UserErrors, and related metrics
  • Query performance – Track ThrottledRequests, ReadThrottleEvents, WriteThrottleEvents, and custom metrics to monitor query or scan efficiency (ScannedCount or Count), client-side filtering duration, and external service call latencies
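
A minimal sketch of publishing one such custom metric with the AWS SDK for .NET CloudWatch client follows; the namespace, metric name, and dimension are illustrative assumptions.

using System.Collections.Generic;
using System.Threading.Tasks;
using Amazon.CloudWatch;
using Amazon.CloudWatch.Model;

public class QueryEfficiencyMetrics
{
    private readonly IAmazonCloudWatch _cloudWatch;
    public QueryEfficiencyMetrics(IAmazonCloudWatch cloudWatch) => _cloudWatch = cloudWatch;

    // Publishes how many items a query scanned versus returned; a high ratio
    // flags filter expressions that discard most of what they read.
    public Task RecordScanEfficiencyAsync(string queryName, int scannedCount, int returnedCount)
    {
        return _cloudWatch.PutMetricDataAsync(new PutMetricDataRequest
        {
            Namespace = "SocialApp/DynamoDB", // illustrative namespace
            MetricData = new List<MetricDatum>
            {
                new MetricDatum
                {
                    MetricName = "ScannedToReturnedRatio",
                    Unit = StandardUnit.None,
                    Value = returnedCount == 0
                        ? scannedCount
                        : (double)scannedCount / returnedCount,
                    Dimensions = new List<Dimension>
                    {
                        new Dimension { Name = "QueryName", Value = queryName }
                    }
                }
            }
        });
    }
}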

Transaction management

Maintaining data consistency is critical in many scenarios. When updating user profiles along with their post metadata, or managing comment threads with their associated counters, transactional consistency becomes important. You might need to implement the following (a transactional sketch follows this list):

  • Transactional operation handling
  • Timeout and conflict management
  • Compensation logic for failed transactions
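
A minimal sketch of a transactional write using TransactWriteItems follows, assuming an illustrative single-table layout; it pairs a comment insert with an atomic counter update so both succeed or fail together.

using System.Collections.Generic;
using System.Threading.Tasks;
using Amazon.DynamoDBv2;
using Amazon.DynamoDBv2.Model;

public class CommentWriter
{
    private readonly IAmazonDynamoDB _client;
    public CommentWriter(IAmazonDynamoDB client) => _client = client;

    // Inserts a comment and increments the post's comment counter atomically:
    // either both writes succeed or neither does.
    public async Task AddCommentAsync(string postId, string commentId, string content)
    {
        try
        {
            await _client.TransactWriteItemsAsync(new TransactWriteItemsRequest
            {
                TransactItems = new List<TransactWriteItem>
                {
                    new TransactWriteItem
                    {
                        Put = new Put
                        {
                            TableName = "SocialMedia", // illustrative single-table name
                            Item = new Dictionary<string, AttributeValue>
                            {
                                ["PK"] = new AttributeValue { S = $"POST#{postId}" },
                                ["SK"] = new AttributeValue { S = $"COMMENT#{commentId}" },
                                ["Content"] = new AttributeValue { S = content }
                            }
                        }
                    },
                    new TransactWriteItem
                    {
                        Update = new Update
                        {
                            TableName = "SocialMedia",
                            Key = new Dictionary<string, AttributeValue>
                            {
                                ["PK"] = new AttributeValue { S = $"POST#{postId}" },
                                ["SK"] = new AttributeValue { S = "METADATA" }
                            },
                            UpdateExpression = "ADD CommentCount :one",
                            ExpressionAttributeValues = new Dictionary<string, AttributeValue>
                            {
                                [":one"] = new AttributeValue { N = "1" }
                            }
                        }
                    }
                }
            });
        }
        catch (TransactionCanceledException)
        {
            // A conflict or failed condition cancels the whole transaction;
            // compensation or retry logic would go here.
            throw;
        }
    }
}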

This abstraction layer helps your application interact with DynamoDB efficiently while maintaining clean separation of concerns and consistent behavior across all data access operations. When implementing these features in your abstraction layer, consider approaches to monitor and optimize their effectiveness. For instance, you can implement a centralized error tracking mechanism using custom CloudWatch metrics for different DynamoDB operations. These insights can help continuously improve your abstraction layer’s reliability and performance.

Handling filters

After you design your DynamoDB API abstraction layer with core operations and data loading capabilities, analyze how to adapt existing query patterns to align with the DynamoDB querying approach. As a first step, examine how query filter conditions transition from relational SQL querying to DynamoDB patterns.

Whereas relational databases use query optimizers for WHERE clause filters, DynamoDB empowers developers with precise control over query execution through its purposeful design of base tables and indexes. This design enables predictable and consistent performance at scale.

DynamoDB processes queries in a two-step manner. First, it retrieves items that match the key condition expression against partition and sort keys. Then, before returning the results, it applies filter expressions on non-key attributes. Although filter expressions don’t reduce RCU consumption as the entire result set is read before filtering, they reduce data transfer costs and improve application performance by filtering data at the DynamoDB service level.
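
A minimal sketch of that two-step shape on the low-level API follows (table, key, and attribute names are illustrative): the key condition bounds what DynamoDB reads, and therefore bills, while the filter expression trims what it returns.

using System.Collections.Generic;
using Amazon.DynamoDBv2.Model;

// Step 1 (key condition): selects one post's comment item collection.
// Step 2 (filter): drops low-engagement comments before results are returned,
// saving data transfer but not RCUs.
var request = new QueryRequest
{
    TableName = "SocialMedia", // illustrative single-table name
    KeyConditionExpression = "PK = :pk AND begins_with(SK, :prefix)",
    FilterExpression = "LikeCount >= :minLikes",
    ExpressionAttributeValues = new Dictionary<string, AttributeValue>
    {
        [":pk"] = new AttributeValue { S = "POST#123" },
        [":prefix"] = new AttributeValue { S = "COMMENT#" },
        [":minLikes"] = new AttributeValue { N = "10" }
    }
};
// await client.QueryAsync(request); // comparing ScannedCount and Count
// in the response reveals how much the filter discards.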

Analyze your application’s data access patterns to optimize your queries for this two-step process. Consider developing a design approach that facilitates seamless translation to DynamoDB expression statements, which improves productivity when rewriting a large set of queries. Build upon your DynamoDB API abstraction layer’s helper methods for constructing key conditions and filter expressions. For example, in our social media application, we developed methods that handle common filtering scenarios like date range filters or engagement metric thresholds. These methods can be combined and reused across different query requirements, reducing development effort and maintaining consistency in how filters are applied.

Handling complex filter requirements

DynamoDB’s flexible expression capabilities handle many filtering scenarios directly, and you can implement client-side filtering for any additional requirements. Some examples include:

  • Unsupported functions or methods – When working with filters that reference system or user-defined functions, retrieve the data from DynamoDB and apply these specialized filters at the application layer. For SQL queries that use functions like string operations (SUBSTRING, CONCAT), date/time calculations (DATEADD, DATEDIFF), or mathematical functions (ROUND, CEILING), retrieve the base data and apply these operations in your application layer. Consider designing pre-calculated attributes during data model design to avoid client-side filtering that can impact performance.
  • Loading related entity data – For queries that filter based on attributes from related entities, your application might need to load data from multiple DynamoDB tables or item collections and apply filters at the application layer. For example, when finding posts based on author characteristics or comment patterns, design efficient data retrieval strategies and consider whether denormalization might be appropriate for frequently accessed patterns.
  • Integrating with external data sources – In microservice architectures, filtering might require data from other services or databases. Design efficient data retrieval strategies and consider implementing appropriate caching mechanisms to minimize the performance impact of cross-service filtering. Analyze these scenarios to determine the best approach for your specific use case.

Let’s examine the use case of retrieving post comments by active authors and sentiment score, requiring data from an external user service and analytics database:

/*
Original SQL Query demonstrating filters across different data sources:
SELECT c.*, u.name, u.profile_pic, u.status, m.sentiment_score
FROM comments c
JOIN users u ON c.user_id = u.id 
JOIN comment_analytics m ON c.id = m.comment_id
WHERE c.post_id = '123'
  AND c.created_at > DATEADD(year, -1, GETUTCDATE())
  AND u.status = 'ACTIVE'
  AND m.sentiment_score > 0.8
*/

public class Comment
{   
    [DynamoDBHashKey]
    public string PostId { get; set; }
    [DynamoDBRangeKey]
    public string CreatedAt { get; set; }
    [DynamoDBProperty]
    public string CommentId { get; set; }
    [DynamoDBProperty]
    public string UserId { get; set; }
    [DynamoDBProperty]
    public string Content { get; set; }
}

public class PostCommentService
{
    private readonly IDynamoDBContext _dynamoDbContext;
    private readonly IUserService _userService;
    private readonly ICommentAnalytics _analyticsDb;

   //Initialize readonly fields in constructor
   
    public async Task<IEnumerable<Comment>> GetPostCommentsAsync(
        string postId, 
        DateTime startDate,
        double minSentimentScore)
    {
        // Step 1: Query DynamoDB for comments
        var comments = await _dynamoDbContext.QueryAsync<Comment>(postId,
                    QueryOperator.GreaterThanOrEqual,
                    new[] { startDate.ToString("yyyy-MM-dd") })
                .GetRemainingAsync();
                                
        // Step 2: Get user details and filter by active status
        var userIds = comments.Select(c => c.UserId).Distinct();
        var userDetails = await _userService.GetUserDetailsAsync(userIds);
        comments = comments.Where(c => userDetails[c.UserId].Status == "ACTIVE").ToList();

        // Step 3: Apply sentiment score filter from analytics
        var commentIds = comments.Select(c => c.CommentId);
        var sentimentScores = await _analyticsDb.GetSentimentScoresAsync(commentIds);
        
        return comments.Where(c => sentimentScores[c.CommentId] > minSentimentScore);
    }
}

When analyzing your existing queries, identify scenarios requiring client-side filtering and evaluate their performance implications. This analysis helps you do the following:

  • Estimate development effort
  • Plan optimization strategies
  • Determine caching needs
  • Assess impact on response times

Consider these factors while designing your data access layer to achieve efficient query handling in your DynamoDB implementation. As you implement your design, consider approaches to monitor and optimize filter operations. For instance, you can track metrics about filter usage patterns and their performance impact, helping you validate your implementation decisions and identify optimization opportunities as your application evolves.

Handling pagination

Evaluate your application’s current pagination strategy and align it with DynamoDB capabilities. Whereas relational database applications often display total page numbers to users, DynamoDB is optimized for forward-only, key-based pagination using LastEvaluatedKey. Because implementing features like total record counts requires full table scans, consider efficient alternatives that take advantage of DynamoDB’s strengths. Discuss with stakeholders how pagination approaches like cursor-based navigation or “load more” patterns can provide excellent user experience while maintaining optimal performance.

For applications that require result-set size context, consider maintaining counters instead of calculating totals in real time. In our social media application, we store and update post counts per user during write operations, allowing us to show information like “Viewing 50 of approximately 1,000 posts” without requiring full table scans. However, these counters become less accurate when queries include filters. For common, predefined filters, separate counters can be maintained (e.g., posts_count_last_30_days). For dynamic filter combinations, consider alternative patterns such as infinite scroll that align better with DynamoDB’s pagination model while providing good user experience.

When designing pagination in your data access layer for DynamoDB, understand its core pagination behavior. DynamoDB might not return all matching items in a single API call due to two key constraints: the "Limit" parameter and the 1 MB maximum read size. Consequently, your implementation needs to handle multiple API calls using LastEvaluatedKey to fulfill pagination requirements. Design your data access layer to manage this process transparently, maintaining a clean separation between pagination mechanics and business logic.
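
A minimal sketch of that transparent page-assembly loop on the low-level client follows. The tuple return and raw LastEvaluatedKey are illustrative; a production layer would serialize the key into the opaque string token used by the interfaces shown earlier.

using System.Collections.Generic;
using System.Threading.Tasks;
using Amazon.DynamoDBv2;
using Amazon.DynamoDBv2.Model;

public class PageAssembler
{
    private readonly IAmazonDynamoDB _client;
    public PageAssembler(IAmazonDynamoDB client) => _client = client;

    // Issues as many Query calls as needed to fill one logical page, because
    // a single call may return fewer items than Limit (1 MB read cap, filter
    // expressions) even when more matching items exist.
    public async Task<(List<Dictionary<string, AttributeValue>> Items,
                       Dictionary<string, AttributeValue> NextKey)> GetPageAsync(
        QueryRequest template, int pageSize,
        Dictionary<string, AttributeValue> exclusiveStartKey)
    {
        var items = new List<Dictionary<string, AttributeValue>>();
        var startKey = exclusiveStartKey;

        do
        {
            template.Limit = pageSize - items.Count;
            template.ExclusiveStartKey = startKey;

            var response = await _client.QueryAsync(template);
            items.AddRange(response.Items);
            startKey = response.LastEvaluatedKey;
        }
        while (items.Count < pageSize && startKey != null && startKey.Count > 0);

        // An empty LastEvaluatedKey means the result set is exhausted.
        return (items, startKey);
    }
}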

Consider the following factors when implementing DynamoDB pagination:

  • Filtering impact analysis – Evaluate your query filters, including those applied through filter expressions or client-side filtering. Assess the cardinality of your data to understand what percentage of query results are filtered out. This analysis helps determine an appropriate "Limit" parameter that aligns with your application’s page size needs while accounting for filtered results.
  • Limit parameter optimization – Setting the limit parameter requires careful consideration of tradeoffs. Setting it too low might lead to unnecessary API calls, impacting performance. Conversely, setting it too high might retrieve excess data, also affecting performance and cost. Aim for a limit that closely matches your desired page size while accounting for filtering effects.
  • Performance monitoring – Implement proper monitoring for your pagination implementation to track efficiency metrics like the number of API calls per page request and average response times. Use this data to fine-tune your pagination parameters and identify opportunities for optimization. Consider implementing appropriate caching strategies for frequently accessed pages to improve performance further.

By considering these aspects and maintaining proper monitoring, you can implement an efficient pagination process that optimizes data retrieval while effectively managing performance and costs. For instance, you can track metrics like the average number of DynamoDB calls per page request and result set distributions. These insights can help fine-tune your implementation parameters and identify opportunities for optimization as your application grows.

Handling edge cases

When migrating your data access layer to DynamoDB, identify and address edge cases that involve large-scale data operations. Understanding and planning for these edge cases helps make sure your DynamoDB implementation remains performant and cost-effective under extreme conditions:

  • Predictable high-volume operations – Consider a scenario where a user with millions of followers posts content, requiring updates to news feeds or notification tables for all followers. These are operations where we can determine the scale in advance based on known factors like follower count. Design patterns like write sharding or batch processing can help manage these scenarios effectively. For instance, you might implement a fan-out-on-read approach for high-follower accounts instead of updating all follower feeds immediately.
  • Unexpected scale events – Some operations can experience sudden, unpredictable spikes in activity. For example, when a post unexpectedly receives massive engagement, generating thousands of reads and writes per second. Unlike predictable high-volume operations where we can plan our data model and access patterns in advance, these scenarios require strategies like dynamic scaling, caching, and asynchronous patterns to handle sudden load spikes while maintaining application performance.

When analyzing your application for edge cases, consider these factors:

  • Scale implications of high-volume operations
  • Burst capacity requirements for sudden traffic spikes
  • Cost implications of different implementation approaches
  • Performance impact on other application functions

Regular load testing and monitoring of these edge case scenarios helps validate your implementation approaches and identify potential optimizations. When implementing your edge case handling strategy, consider approaches to detect and respond to these scenarios in production. For instance, you can set up monitoring mechanisms to track partition key usage patterns and identify potential hot partition situations before they impact performance. This proactive approach makes sure your application can handle extreme conditions while maintaining performance and managing costs effectively.

Handling aggregations and de-normalized data

When migrating from relational databases to Amazon DynamoDB, aggregations and de-normalized data can have an impact on your existing command queries, which you might have to account for when redesigning your data access layer.

Managing aggregations

Relational databases typically use JOINs and GROUP BY clauses for real-time aggregations, such as calculating total posts per user or comments per post. DynamoDB partition and sort key-based access patterns support different approaches for handling aggregations. In our social media application, we maintain aggregation entities to store pre-calculated values. For example, we store a user’s total posts, total followers, and engagement metrics as separate items that update when corresponding actions occur. This pattern can be applied to any application where real-time aggregations are frequently accessed.
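
A minimal sketch of maintaining one such pre-calculated aggregate follows, assuming an illustrative key layout; the ADD update expression increments atomically, with no read-modify-write cycle.

using System.Collections.Generic;
using System.Threading.Tasks;
using Amazon.DynamoDBv2;
using Amazon.DynamoDBv2.Model;

public class UserStatsUpdater
{
    private readonly IAmazonDynamoDB _client;
    public UserStatsUpdater(IAmazonDynamoDB client) => _client = client;

    // ADD is atomic, so concurrent writers can bump the same counter safely;
    // it also creates the attribute (and the item) if it does not exist yet.
    public Task IncrementPostCountAsync(string userId)
    {
        return _client.UpdateItemAsync(new UpdateItemRequest
        {
            TableName = "SocialMedia", // illustrative single-table name
            Key = new Dictionary<string, AttributeValue>
            {
                ["PK"] = new AttributeValue { S = $"USER#{userId}" },
                ["SK"] = new AttributeValue { S = "STATS" }
            },
            UpdateExpression = "ADD TotalPosts :one",
            ExpressionAttributeValues = new Dictionary<string, AttributeValue>
            {
                [":one"] = new AttributeValue { N = "1" }
            }
        });
    }
}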

When implementing aggregation strategies, analyze the following:

  • Which aggregations are frequently accessed
  • Frequency of updates to aggregated values
  • Performance requirements for aggregation queries
  • Consistency requirements for aggregated data

Handling de-normalized data

DynamoDB often requires data de-normalization based on access pattern requirements. For instance, in our application, we store user status directly on post entities to enable efficient filtering. This approach trades off increased write operations for improved read efficiency.

When analyzing de-normalization needs, consider the following:

  • Frequency of attribute access
  • Update patterns of source data
  • Impact on write operations
  • Required consistency level

Managing updates

To manage updates to aggregated entities or de-normalized attributes, you can choose between the following methods:

  • Synchronous updates – Our application uses this approach for critical user-facing features where immediate consistency is required. For example, updating like counts on popular posts uses transactions to maintain consistency, though this might impact write performance during high-traffic periods.
  • Asynchronous updates – We implement this pattern using Amazon DynamoDB Streams and AWS Lambda, a loosely coupled architecture with lower performance impact that suits less time-critical updates. For instance, updating trending post rankings or user activity summaries can tolerate eventual consistency in favor of better performance (a stream-handler sketch follows this list).
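
A minimal sketch of such a stream consumer follows, assuming a recent version of the Amazon.Lambda.DynamoDBEvents package (where EventName is exposed as a string); the attribute names and downstream update are illustrative.

using System.Threading.Tasks;
using Amazon.Lambda.Core;
using Amazon.Lambda.DynamoDBEvents;

public class EngagementStreamHandler
{
    // Invoked by Lambda with batches of DynamoDB Streams records; failures
    // here should be handled (e.g., with a failure destination) so that
    // eventual consistency is eventually reached.
    public async Task FunctionHandler(DynamoDBEvent dynamoEvent, ILambdaContext context)
    {
        foreach (var record in dynamoEvent.Records)
        {
            // Only react to new or changed like counts on post items.
            if (record.EventName == "REMOVE") continue;

            var newImage = record.Dynamodb.NewImage;
            if (newImage == null || !newImage.ContainsKey("LikeCount")) continue;

            var postKey = newImage["PK"].S;
            var likeCount = long.Parse(newImage["LikeCount"].N);

            context.Logger.LogLine($"Recomputing trending rank for {postKey} ({likeCount} likes)");
            await UpdateTrendingRankAsync(postKey, likeCount); // illustrative downstream update
        }
    }

    private Task UpdateTrendingRankAsync(string postKey, long likeCount) => Task.CompletedTask;
}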

Analytical processing

For complex analytical queries or large-scale reporting needs, consider complementary services; for example, you can export table data to Amazon S3 and query it with Amazon Athena, or integrate with services such as Amazon Redshift or Amazon OpenSearch Service for warehouse-style analytics and search workloads.

By analyzing your aggregation and analytical requirements and selecting appropriate tools and approaches, you can make sure your modernized data access layer effectively handles these data processing needs while taking advantage of the strengths of DynamoDB. When implementing your aggregation strategy, consider approaches to monitor the health of your solution. For instance, you can track metrics about aggregation update latency and consistency patterns. These insights can help validate your implementation choices and make sure your aggregation strategy maintains optimal performance as your application scales.

Conclusion

In this post, we explored strategies for modernizing your application’s data access layer for DynamoDB. The transition from SQL-based patterns to a DynamoDB API-driven approach offers opportunities to optimize how your application interacts with its data.

Building on the data models designed in Part 2, we examined how to implement efficient query patterns through DynamoDB features for filtering, pagination, and aggregation. The abstraction layer patterns we discussed can help create a clean separation between your application logic and DynamoDB operations while maintaining consistent performance.

The DynamoDB approach to data access differs from traditional SQL patterns, but with proper implementation of the strategies we’ve covered—from error handling to edge cases—you can build a robust data access layer that takes advantage of DynamoDB capabilities effectively. Close collaboration between database and application teams helps create solutions that balance performance, cost optimization, and scalability. Begin implementing these patterns by creating focused proof-of-concept implementations. Test your abstraction layer design with representative workloads to validate your approach before expanding to your full application scope.


About the authors



Google Launches Gemini CLI: Open-Source Terminal AI Agent for Developers

MMS Founder
MMS Robert Krzaczynski

Article originally posted on InfoQ. Visit InfoQ

Google has released Gemini CLI, a new open-source AI command-line interface that brings the full capabilities of its Gemini 2.5 Pro model directly into developers’ terminals. Designed for flexibility, transparency, and developer-first workflows, Gemini CLI provides high-performance, natural language AI assistance through a lightweight, locally accessible interface.

Gemini CLI is available today under the Apache 2.0 license, enabling developers to inspect, modify, and extend the source code. It features deep integration with Gemini Code Assist, allowing developers to seamlessly shift between IDE-based and terminal-based AI assistance using the same model backbone.
Key capabilities of Gemini CLI include:

  • Support for Gemini 2.5 Pro with a 1 million token context window
  • Prompt grounding with Google Search, enabling real-time web context integration
  • Built-in support for the Model Context Protocol (MCP) and custom system prompts (via GEMINI.md)
  • Non-interactive scripting mode, allowing terminal automation with AI as part of CI/CD workflows

Once authenticated with a personal Google account, developers can access Gemini CLI for free under a Gemini Code Assist license. Advanced users can alternatively configure Gemini CLI with API keys from Google AI Studio or Vertex AI for more control or higher-volume use cases.

Gemini CLI supports a range of developer workflows, including:

  • Writing, refactoring, and debugging code
  • Automating terminal tasks and shell scripting
  • Researching technical topics or documentation
  • Generating structured content or markdown
  • Performing local file and system-level operations

The project is intended to evolve with community input, and contributions are encouraged via the Gemini CLI GitHub repository. Google highlights that this release continues the company’s shift toward open, extensible AI tooling aimed at democratizing access to powerful models across platforms.

However, initial user feedback points to areas that still need refinement. One developer commented:

Tried a bit just now; for my not-too-difficult task, it firstly searched a codebase for 4 minutes, then ended up asking to explore the code in another codebase, to which all calls were commented out. Doesn’t feel close to Claude Code yet.

Another Reddit user added:

Well, it is fine until 5 minutes into the session, when it switches the model to flash, which is entirely awful at coding.

For developers who prefer working in an IDE, Gemini Code Assist now shares agent technology with Gemini CLI. This includes multi-step planning, auto-recovery, and reasoning-based code generation in VS Code, offered free across all tiers.

Gemini CLI is available today at cli.gemini.dev and requires only a Google login to get started.

About the Author



NoSQL Market Growing at 28.1% CAGR | Reach USD 86.3 Billion by 2032 Globally

MMS Founder
MMS RSS

Posted on nosqlgooglealerts. Visit nosqlgooglealerts

NoSQL Market Share

WILMINGTON, DE, UNITED STATES, July 3, 2025 /EINPresswire.com/ — Allied Market Research published a new report, titled, “NoSQL Market Growing at 28.1% CAGR | Reach USD 86.3 Billion by 2032 Globally.” The report offers an extensive analysis of key growth strategies, drivers, opportunities, key segments, Porter’s Five Forces analysis, and competitive landscape. This study is a helpful source of information for market players, investors, VPs, stakeholders, and new entrants to gain a thorough understanding of the industry and determine steps to be taken to gain competitive advantage.

The global NoSQL market size was valued at USD 7.3 billion in 2022, and is projected to reach USD 86.3 billion by 2032, growing at a CAGR of 28.1% from 2023 to 2032.

Request Sample Report (Get Full Insights in PDF – 350 Pages) at: https://www.alliedmarketresearch.com/request-sample/640

Driving Factors NoSQL Market

The rise in demand for big data analytics, enterprise-wide need for scalable and flexible database solutions, and growth in adoption of cloud computing technology are expected to drive the global NoSQL market growth. However, the high complexities of administrating NoSQL databases and the potential threat of data-related inconsistencies are expected to hinder market growth. Furthermore, the rise in adoption of advanced technologies such as AI & ML offers lucrative market opportunities for the market players.

Market Segmentation NoSQL Market

The NoSQL market is segmented on the basis of type, application, industry vertical, and region. On the basis of type, it is categorized into key-value store, document database, column-based store, and graph database. On the basis of application, it is divided into data storage, mobile apps, data analytics, web apps, and others. The data storage segment is further sub-segmented into distributed data depository, cache memory, and metadata store. On the basis of industry vertical, it is categorized into retail, gaming, IT, and others. On the basis of region, the market is analyzed across North America, Europe, Asia-Pacific, and LAMEA.

Key Market Players in NoSQL Market

The report analyzes the profiles of key players operating in the NoSQL market such as Aerospike Inc., Couchbase Inc., IBM Corporation, Neo4j, Inc., Objectivity, Inc, Oracle Corporation, Progress Software Corporation, Riak, ScyllaDB, Inc. and Apache Software Foundation. These players have adopted various strategies such as collaboration, acquisition, and product launch to increase their market penetration and strengthen their position in the NoSQL market.

If you have any questions, Please feel free to contact our analyst at: https://www.alliedmarketresearch.com/connect-to-analyst/640

North America region to maintain its dominance by 2032

On the basis of region, the North America segment held the highest market share in terms of revenue in 2022, accounting for less than two-fifths of the NoSQL market revenue. The increase in the usage of NoSQL solutions to improve business operations and the customer experience is anticipated to propel the growth of the market in this region. However, the Asia-Pacific segment is projected to manifest the highest CAGR of 26.8% from 2023 to 2032. Countries such as China, India, and South Korea are at the forefront, embracing digital technologies to enhance their effectiveness and competitiveness, which is further expected to contribute to the growth of the market in this region.

The key-value store segment to maintain its leadership status throughout the forecast period

On the basis of type, the key-value store segment held the highest market share in 2022, accounting for less than two-fifths of the NoSQL market revenue, and is estimated to maintain its leadership status throughout the forecast period. This is attributed to its high scalability and its ability to support multiple data models on a single database with faster access, which is expected to continue driving its adoption. However, the document database segment is projected to manifest the highest CAGR of 29.0% from 2023 to 2032, as these database services help to reduce the time and costs associated with optimizing systems in the initial phase of deployment.

The web apps segment to maintain its lead position during the forecast period

On the basis of application, the web apps segment accounted for the largest share in 2022, contributing to more than one-fourth of the NoSQL market revenue, owing to growth in the usage of website-based solutions in several industries. However, the mobile apps segment is expected to register the highest CAGR of 31% from 2023 to 2032. Mobile apps provide several advantages such as reducing costs, supporting business operations, and effectively controlling the business environment within an organization.

The IT segment to maintain its lead position during the forecast period

On the basis of industry vertical, the IT segment accounted for the largest share in 2022, contributing to less than two-fifths of the NoSQL market revenue, owing to the development of digital technologies in the IT sector. However, the gaming segment is projected to manifest the highest CAGR of 35.4% from 2023 to 2032. The surge in the implementation of automation trends and the increase in utilization of digital technology in this sector are expected to provide lucrative opportunities for the market.

Buy Now & Get Exclusive Discount on this Report (350 Pages PDF with Insights, Charts, Tables, and Figures) at: https://www.alliedmarketresearch.com/NoSQL-market/purchase-options

COVID-19 Scenario

● The NoSQL market witnessed stable growth during the COVID-19 pandemic, owing to the dramatically increased dependence on digital devices. The surge in online presence of people during the period of COVID-19 induced lockdowns and social distancing policies fueled the need for NoSQL solutions.

● In addition, with the majority of the population confined to their homes during the early stages of the COVID-19 pandemic, businesses needed to optimize their operations and offerings to maximize revenue opportunities and support the rapidly evolving business environment that followed the outbreak.

Recent Partnerships in the NoSQL Market:

● For instance, in May 2023, MongoDB expanded its partnership with Alibaba Cloud, including new joint marketing efforts, joint revenue commitments, and tighter technology integrations. Customers can rapidly innovate and scale their business while reducing costs and increasing efficiency on ApsaraDB for MongoDB by using the MongoDB database with Alibaba Cloud’s distinctive features.

Recent Product Launches in the NoSQL Market:

● In July 2021, Couchbase launched its updated NoSQL database providing users with a series of new features that aim to narrow the gap between NoSQL and relational databases.

Thanks for reading this article, you can also get an individual chapter-wise section or region-wise report versions like North America, Europe, or Asia.

If you have any special requirements, please let us know and we will offer you the report as per your requirements.

Lastly, this report provides market intelligence most comprehensively. The report structure has been kept such that it offers maximum business value. It provides critical insights into market dynamics and will enable strategic decision-making for existing market players as well as those willing to enter the market.

Other Trending Reports:

IP Telephony Market
Router Market

About Us:

Allied Market Research (AMR) is a market research and business-consulting firm of Allied Analytics LLP, based in Portland, Oregon. AMR offers market research reports, business solutions, consulting services, and insights on markets across 11 industry verticals. Adopting extensive research methodologies, AMR is instrumental in helping its clients make strategic business decisions and achieve sustainable growth in their market domains. We are equipped with skilled analysts and experts and have extensive experience working with many Fortune 500 companies and small & medium enterprises.

Pawan Kumar, the CEO of Allied Market Research, is leading the organization toward providing high-quality data and insights. We are in professional corporate relations with various companies, which helps us dig out market data that enables us to generate accurate research data tables and confirms utmost accuracy in our market forecasting. Every piece of data presented in the reports published by us is extracted through primary interviews with top officials from leading companies of the domain concerned. Our secondary data procurement methodology includes deep online and offline research and discussion with knowledgeable professionals and analysts in the industry.

Contact:
David Correa
1209 Orange Street,
Corporation Trust Center,
Wilmington, New Castle,
Delaware 19801 USA.
Int’l: +1-503-894-6022
Toll Free: +1-800-792-5285
UK: +44-845-528-1300
India (Pune): +91-20-66346060
Fax: +1-800-792-5285
help@alliedmarketresearch.com


Legal Disclaimer:

EIN Presswire provides this news content “as is” without warranty of any kind. We do not accept any responsibility or liability for the accuracy, content, images, videos, licenses, completeness, legality, or reliability of the information contained in this article. If you have any complaints or copyright issues related to this article, kindly contact the author above.



What Makes Atlas the Core Driver of MongoDB’s Revenue Growth? – July 3, 2025 – Zacks

MMS Founder
MMS RSS

Posted on mongodb google news. Visit mongodb google news


Article originally posted on mongodb google news. Visit mongodb google news



Two Tools To Elevate Your MongoDB Experience – I Programmer

MMS Founder
MMS RSS

Posted on mongodb google news. Visit mongodb google news

The two tools pull in opposite directions: the first lets you write SQL instead of Mongo’s special syntax, while the other lets you manipulate the database without writing any query language at all, just by employing natural language.

The first one, Queryleaf, translates SQL queries into MongoDB commands. It parses SQL using node-sql-parser, transforms it into an abstract command set, and then executes those commands against the MongoDB Node.js driver.

It supports simple SQL statements such as INSERT, UPDATE, SELECT, and DELETE, but it can also handle more advanced querying, including nested field access, arrays, and aggregates.

So, for instance, the following SQL query:

SELECT name, email FROM users WHERE age > 21

will be translated into the MongoDB command:

db.collection('users').find(
  { age: { $gt: 21 } },
  { name: 1, email: 1 }
)

So far so good, but the real magic happens when accessing nested fields with SQL only:

SQL:

SELECT name, address.city, address.zip FROM users WHERE address.city = 'New York'

MongoDB:

db.collection('users').find(
  { 'address.city': 'New York' },
  { name: 1, 'address.city': 1, 'address.zip': 1 }
)

Note that Queryleaf is multifaceted. It’s a library, so it can be called from your code; it’s also available as a CLI for command-line SQL queries, as a web server for REST API access, and as a PostgreSQL Wire Protocol server for connecting with standard PostgreSQL clients.
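
As a rough sketch of the library use case, the snippet below shows the intended SQL-in, documents-out flow. Note that the package name @queryleaf/lib, the QueryLeaf class, and the executeQuery method are assumptions made for illustration rather than a confirmed API; consult the project’s documentation for the actual entry points.

import { MongoClient } from "mongodb";
// Hypothetical import; the real package name may differ.
import { QueryLeaf } from "@queryleaf/lib";

async function main(): Promise<void> {
  const client = new MongoClient("mongodb://localhost:27017");
  await client.connect();

  // Assumed constructor: wrap an existing driver connection and a target database.
  const queryLeaf = new QueryLeaf(client, "mydb");

  // SQL goes in; Queryleaf parses it with node-sql-parser and runs the
  // equivalent find() against the users collection via the Node.js driver.
  const results = await queryLeaf.executeQuery(
    "SELECT name, email FROM users WHERE age > 21"
  );
  console.log(results);

  await client.close();
}

main().catch(console.error);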

The lesson here is that you can never rule SQL out. But while SQL has been pushed as a universal protocol that unifies access to all kinds of services from all kinds of tools, as we examined in “Tailpipe – The Log Interrogation Game Changer” where SQL was used to manipulate access logs, the Agentic era is now challenging that position. Which brings us to the other tool, ScoutDB, an agentic Mongo GUI.

With ScoutDB, instead of writing queries in MongoDB syntax or SQL, you simply describe what you’re looking for in plain English and let it be translated into MongoDB queries. That way you eliminate the need to remember the exact syntax of collections and structures.

Of course, as with other text-to-SQL tools, it first needs to understand your database schema in order to infer the correct queries. For that reason, ScoutDB automatically maps the relationships between your collections, understanding how your data interconnects even when those relationships aren’t explicitly defined in your schema.
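
To make that workflow concrete, here is a purely hypothetical illustration (the prompt and the generated query below are invented for this article, not actual ScoutDB output) of how an agentic tool might turn a plain-English request into a driver call:

import { MongoClient } from "mongodb";

// Prompt (hypothetical): "Show me the names and emails of users in New York."
// A schema-aware agent could translate that into the equivalent find() call:
async function usersInNewYork(): Promise<void> {
  const client = new MongoClient("mongodb://localhost:27017");
  await client.connect();

  const users = await client
    .db("mydb")
    .collection("users")
    .find(
      { "address.city": "New York" },          // inferred filter
      { projection: { name: 1, email: 1 } }    // inferred projection
    )
    .toArray();

  console.log(users);
  await client.close();
}

usersInNewYork().catch(console.error);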

So which option should you go for? If you are a developer who knows their way around their database schema, uses SQL, can write optimal queries, and at the same time doesn’t want to learn the MongoDB native query syntax, then Queryleaf is for you. Plus, it’s free and open source.

If, on the other hand, you are an Enterprise user who wants to write a report the easiest way possible, then ScoutDB is for you. I’m emphasizing “Enterprise” user because the tool is not free, with the pricing scheme TBA. That, however, doesn’t rule out a potential free plan.

More Information

Queryleaf

ScoutDB

Related Articles

Tailpipe – The Log Interrogation Game Changer

 


Article originally posted on mongodb google news. Visit mongodb google news



Databricks Contributes Spark Declarative Pipelines to Apache Spark

MMS Founder
MMS Patrick Farry

Article originally posted on InfoQ. Visit InfoQ

At the Databricks Data+AI Summit, held in San Francisco, USA, from June 10 to 12, Databricks announced that it is contributing the technology behind Delta Live Tables (DLT) to the Apache Spark project, where it will be called Spark Declarative Pipelines. This move will make it easier for Spark users to develop and maintain streaming pipelines, and furthers Databricks’ commitment to open source.

The new feature will allow developers to define data streaming pipelines without needing to create the usual imperative commands in Spark. While the changes simplify the task of writing and maintaining pipeline code, users will still need to understand the runtime behavior of Spark and be able to troubleshoot issues such as performance and correctness.

In a blog post that describes the new feature, Databricks wrote that pipelines could be defined using SQL syntax or via a simple Python SDK that declares the stream data sources, tables, and their relationships, rather than by writing imperative Spark commands. The company claims this will reduce the need for orchestrators such as Apache Airflow to manage pipelines.

Behind the scenes, the framework interprets the query, then creates a dependency graph and an optimized execution plan.

Declarative Pipelines supports streaming tables from stream data sources such as Apache Kafka topics, and materialized views for storing aggregates and results. The materialized views are updated automatically as new data arrives from the streaming tables.

Databricks provides an overview of the SQL syntax in its documentation. An excerpt is shown here. The example is based on the New York City TLC Trip Record data set.

-- Bronze layer: Raw data ingestion
CREATE OR REFRESH STREAMING TABLE taxi_raw_records 
(CONSTRAINT valid_distance EXPECT (trip_distance > 0.0) ON VIOLATION DROP ROW)
AS SELECT *
FROM STREAM(samples.nyctaxi.trips);

-- Silver layer 1: Flagged rides
CREATE OR REFRESH STREAMING TABLE flagged_rides 
AS SELECT
  date_trunc("week", tpep_pickup_datetime) as week,
  pickup_zip as zip, 
  fare_amount, trip_distance
FROM
  STREAM(LIVE.taxi_raw_records)
WHERE ((pickup_zip = dropoff_zip AND fare_amount > 50) OR
       (trip_distance < 5 AND fare_amount > 50));

The example shows how a pipeline can be built by defining streams with the CREATE STREAMING TABLE command and then consuming them with a FROM statement in subsequent queries. Of note in the example is the ability to include data quality checks in the pipeline with the CONSTRAINT … EXPECT … ON VIOLATION syntax.

While the Apache Spark changes are not yet released, many articles already describe the experience of engineers using Databricks DLT. In an article on Medium titled “Why I Liked Delta Live Tables in Databricks,” Mariusz Kujawski describes the features of DLT and how they can best be used: “With DLT, you can build an ingestion pipeline in just a few hours, compared to the days required to develop a custom framework. Additionally, built-in data quality enforcement provides an extra layer of reliability.”

In addition to a declarative syntax for defining a pipeline, Spark Declarative Pipelines also supports change data capture (CDC), batch and stream logic, built-in retry logic, and observability hooks.

Declarative pipelines are in the process of being merged into the Spark project. The feature is planned for the next Spark release, 4.1.0, which is expected in January 2026. Progress can be followed in ticket SPARK-51727 of the Apache Spark Jira project.



3 Volatile Stocks in the Doghouse – StockStory

MMS Founder
MMS RSS

Posted on mongodb google news. Visit mongodb google news



Adam Hejl / 2025/07/03 12:32 am EDT


Volatility cuts both ways – while it creates opportunities, it also increases risk, making sharp declines just as likely as big gains. This unpredictability can shake out even the most experienced investors.
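
A quick note on the “Rolling One-Year Beta” figures quoted below: beta measures how strongly a stock’s returns move with the market’s, computed as the covariance of stock and market returns divided by the variance of market returns over a trailing window. A minimal sketch of that calculation, using made-up return series rather than StockStory’s actual methodology:

// Beta = Cov(stockReturns, marketReturns) / Var(marketReturns),
// computed over a rolling window (e.g., one year of daily returns).
function beta(stock: number[], market: number[]): number {
  const mean = (xs: number[]) => xs.reduce((a, b) => a + b, 0) / xs.length;
  const meanS = mean(stock);
  const meanM = mean(market);

  let cov = 0;
  let varM = 0;
  for (let i = 0; i < stock.length; i++) {
    cov += (stock[i] - meanS) * (market[i] - meanM);
    varM += (market[i] - meanM) ** 2;
  }
  return cov / varM;
}

// Toy example: a stock that moves ~1.7x as far as the market each day.
const market = [0.01, -0.02, 0.015, -0.005];
const stock = market.map((r) => 1.7 * r + 0.001);
console.log(beta(stock, market).toFixed(2)); // ~1.70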

Navigating these stocks isn’t easy, which is why StockStory helps you find Comfort In Chaos. Keeping that in mind, here are three volatile stocks best left to the gamblers and some better opportunities instead.

MongoDB (MDB)

Rolling One-Year Beta: 1.74

Started in 2007 by the team behind Google’s ad platform, DoubleClick, MongoDB offers database-as-a-service that helps companies store large volumes of semi-structured data.

Why Do We Think Twice About MDB?

  1. Historical operating margin losses show it had an inefficient cost structure while scaling
  2. Lacking free cash flow generation means it has few chances to reinvest for growth, repurchase shares, or distribute capital

MongoDB’s stock price of $205.25 implies a valuation ratio of 7x forward price-to-sales. Read our free research report to see why you should think twice about including MDB in your portfolio.

Carrier Global (CARR)

Rolling One-Year Beta: 1.20

Founded by the inventor of air conditioning, Carrier Global (NYSE:CARR) manufactures heating, ventilation, air conditioning, and refrigeration products.

Why Are We Hesitant About CARR?

  1. Organic revenue growth fell short of our benchmarks over the past two years and implies it may need to improve its products, pricing, or go-to-market strategy
  2. Free cash flow margin shrank by 5.9 percentage points over the last five years, suggesting the company is consuming more capital to stay competitive
  3. Diminishing returns on capital suggest its earlier profit pools are drying up

Carrier Global is trading at $75 per share, or 24.5x forward P/E. To fully understand why you should be careful with CARR, check out our full research report (it’s free).

Crane (CR)

Rolling One-Year Beta: 1.23

Based in Connecticut, Crane (NYSE:CR) is a diversified manufacturer of engineered industrial products, including fluid handling and aerospace technologies.

Why Do We Steer Clear of CR?

  1. Absence of organic revenue growth over the past two years suggests it may have to lean into acquisitions to drive its expansion
  2. Demand will likely be soft over the next 12 months as Wall Street’s estimates imply tepid growth of 6.3%
  3. Earnings per share were flat over the last five years and fell short of the peer group average

At $192.42 per share, Crane trades at 34x forward P/E. Dive into our free research report to see why there are better opportunities than CR.

Stocks We Like More

The market surged in 2024 and reached record highs after Donald Trump’s presidential victory in November, but questions about new economic policies are adding much uncertainty for 2025.

While the crowd speculates what might happen next, we’re homing in on the companies that can succeed regardless of the political or macroeconomic environment. Put yourself in the driver’s seat and build a durable portfolio by checking out our Top 5 Growth Stocks for this month. This is a curated list of our High Quality stocks that have generated a market-beating return of 183% over the last five years (as of March 31st 2025).

Stocks that made our list in 2020 include now familiar names such as Nvidia (+1,545% between March 2020 and March 2025) as well as under-the-radar businesses like the once-micro-cap company Kadant (+351% five-year return). Find your next big winner with StockStory today for free.

Article originally posted on mongodb google news. Visit mongodb google news
