A Collaborative Approach to Web Applications Accessibility

MMS Founder
MMS Ben Linders

Article originally posted on InfoQ. Visit InfoQ

Developers and designers can work together, sharing knowledge and experience, when creating accessible applications. Accessibility issues can be treated like any other bug: something that needs to be solved first. Accessibility should be embraced as something serious and important to society, and approached as a business opportunity.

Kjersti Krokmogen and Frank Dahle spoke about a collaborative approach to accessibility at NDC Oslo 2023.

Accessibility tends not to be as highly prioritised as it should be, Dahle said. Backlogs are endless and sales departments push other feature requests in order to be competitive:

As long as the accessibility requirements are the same for all vendors, and that accessibility is rewarded somehow, I believe the industry will adapt and deliver what’s expected of them.

At the Norwegian labor inspection authority, accessibility issues are treated like any other bug, Krokmogen said. This means they get a higher priority than a feature, user story, or any other task:

After bugs are revealed, we stop production of new features until all the bugs are cleared. A bug could be that the semantics of the header tags didn’t follow the right order.
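A heading-order bug like the one quoted can be caught mechanically. As an illustration only (not the team's actual tooling), here is a small Python sketch using the standard library's `html.parser` that flags headings which skip levels:

```python
from html.parser import HTMLParser

class HeadingOrderChecker(HTMLParser):
    """Flags headings that skip levels (e.g. an <h4> directly after an <h2>)."""
    def __init__(self):
        super().__init__()
        self.last_level = 0
        self.violations = []

    def handle_starttag(self, tag, attrs):
        # handle_starttag receives tag names lowercased: h1..h6
        if len(tag) == 2 and tag[0] == "h" and tag[1].isdigit():
            level = int(tag[1])
            # A document may go deeper by at most one heading level at a time.
            if level > self.last_level + 1:
                self.violations.append((self.last_level, level))
            self.last_level = level

def check_heading_order(html):
    checker = HeadingOrderChecker()
    checker.feed(html)
    return checker.violations

print(check_heading_order("<h1>a</h1><h4>b</h4>"))  # → [(1, 4)]
```

A check like this can run in CI so that a skipped heading level fails the build, the same way any other bug would.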

Krokmogen mentioned that they learned to use screen readers and to navigate the platform using only a keyboard. They also used the Chrome extension Wave, which gave visual feedback and indicated what their issues were.

Using these tools made it much easier to both reveal the bugs and figure out how to prioritise them, Krokmogen explained:

For example, Wave automatically sorts the findings into different categories – including errors, alerts, contrasts, and so on. Start with the error category, then continue with the alerts, and systematically work your way through those findings.

Communication and knowledge sharing are key, Krokmogen said. Working on accessibility has challenged her team to learn more about each other’s domains:

I as a developer learned the workflow and tools that the designers use, and how to use them to benefit me as a developer. The designer learned to read my HTML and CSS to create a clearer bug report for me as a developer. This forced us to find a common language that everyone understands.

They realised that focusing on accessibility when creating something new is a better and more efficient approach than addressing it “when we have time”, Krokmogen said.

Dahle suggested that software organizations should embrace accessibility as something serious and important to society, and approach it as a business opportunity. Integrate accessibility into the whole value chain, from recruitment all the way to testing and marketing. And then make time for people to practice; theory alone is not enough, Dahle said.

InfoQ interviewed Kjersti Krokmogen and Frank Dahle about accessibility.

InfoQ: What challenges do companies face when their software systems need to be accessible?

Frank Dahle: The customer market needs to realize that, when purchasing software that is required to be accessible, they always bear the responsibility towards their users and the authorities. So they need to ensure that all the necessary requirements are properly included in the contract before they sign it. The software company does not bear any responsibility for accessibility if it’s not stated in the contract.

When hiring consultants, it’s about competence and experience. But again, customers need to be clear on their expectations upfront, before hiring. Some larger players with in-house teams do a lot of education themselves. That’s excellent, but they should still require a certain level of competence from consultants.

On our side, the consultancy companies, we need to embrace this as both necessary and an opportunity. We should encourage our customers to put the bar higher and raise their expectations. And, we should follow up by educating our consultants to be experts and advocates of accessibility.

InfoQ: How did the approach that you took for accessibility work out?

Kjersti Krokmogen: It definitely helped to change my mindset. Not considering an accessibility issue as something we do if we have time, but as an actual bug. That makes it much easier to prioritise.

As for my daily routine, it’s still a journey. I have to actively remind myself that new features are not allowed to go to production until they’re tested and approved as accessible. And I hope that one day this will just be an integrated part of my daily routine.

InfoQ: What skills are needed and how can we develop them?

Dahle: It’s very much about practicing solving problems. That’s much more important than memorizing WCAG criteria, methods and theories. Designers and developers should practice together. Accessibility demands a highly interdisciplinary approach.

The technical requirements only get you half the way to an accessible system. The other half is about the human factor. Layout, icons, fonts, and language are some of the “soft” parts which are equally critical to an accessible system, and need to be assessed manually, by expert eyes.

About the Author

Subscribe for MMS Newsletter

By signing up, you will receive updates about our latest information.



AWS Expands Its Cloud Mac Minis Offering With M2 Pro Mac Instances

MMS Founder
MMS Steef-Jan Wiggers

Article originally posted on InfoQ. Visit InfoQ

AWS recently announced the general availability of Amazon EC2 M2 Pro Mac instances (mac2-m2pro.metal) as a virtual Mac offering on its Elastic Computing Cloud (EC2).

The EC2 M2 Pro Mac instance type is built on the 2022 Apple Mac mini with an M2 Pro chip: a 12-core CPU, a 19-core GPU, 32 GiB of memory, and a 16-core Apple Neural Engine. In addition, this instance type is powered by the Amazon Nitro System, attached through high-speed Thunderbolt connections.

EC2 M2 Pro Mac instances offer users Mac mini computers as fully integrated and managed compute instances with up to 10 Gbps of Amazon VPC network bandwidth and up to 8 Gbps of Amazon EBS storage bandwidth.

Under the hood, the EC2 Mac instance connects to a Nitro controller via the Mac’s Thunderbolt connection. When users launch a Mac instance, their Mac-compatible Amazon Machine Image (AMI) runs directly on the Mac mini, with no hypervisor. The Nitro controller sets up the instance and provides secure access to the network and any attached storage. Finally, the Mac mini can natively use any AWS service.

The EC2 Mac instances require a Dedicated Host; hence, users first need to allocate a host to their account and then launch the instance onto the host. Subsequently, they can launch a Mac instance using the AWS Management Console or the AWS CLI.

Allocate Dedicated Host via Console (Source: AWS News blog post)

A respondent in a Hacker News thread commented on which use cases the M2 Pro Mac instances are helpful for, given the pricing:

I would expect these are more likely to be used either for testing or for very niche applications and would never be expected to be competitive for general-purpose workloads. So, I’m not too surprised by the cost.

In another Hacker News thread, a respondent mentioned another potential use case:

I think one of the possible use cases is for corporations to provide “cloud workstations” that cannot be physically stolen, from which the untrusted employee cannot steal files, that are in the correct country even if the employee is on a business trip, that have automated backups by the virtue of being on EBS. Yes, this is expensive, but I have seen this on non-Mac with the motivation of “protecting the company IP.”

Channy Yun, a Principal Developer Advocate at AWS, wrote in an AWS News blog post:

Many customers take advantage of EC2 Mac instances to deliver a complete end-to-end build pipeline on macOS on AWS. With EC2 Mac instances, they can scale their iOS build fleet; easily use custom macOS environments with AMIs; and debug any build or test failures with fully reproducible macOS environments.

Lastly, the Amazon EC2 M2 Pro Mac instances are available in the US West (Oregon) and US East (Ohio) AWS Regions, with additional Regions coming soon. Pricing-wise, EC2 Mac instances are available for purchase as Dedicated Hosts through the On-Demand and Savings Plans pricing models.



NoSQL Data Modeling Mistakes that Ruin Performance – The New Stack

MMS Founder
MMS RSS

Posted on nosqlgooglealerts. Visit nosqlgooglealerts






Even if you adopt the fastest database on powerful infrastructure, you won’t be able to tap its full potential unless you get your data modeling right.


Sep 27th, 2023

Getting your data modeling wrong is one of the easiest ways to ruin your performance. And it’s especially easy to screw this up when you’re working with NoSQL, which (ironically) tends to be used for the most performance-sensitive workloads. NoSQL data modeling might initially appear quite simple: just model your data to suit your application’s access patterns. But in practice, that’s much easier said than done.

Fixing data modeling is no fun, but it’s often a necessary evil. If your data modeling is fundamentally inefficient, your performance will suffer once you scale to some tipping point that varies based on your specific workload and deployment. Even if you adopt the fastest database on the most powerful infrastructure, you won’t be able to tap its full potential unless you get your data modeling right.

This article explores three of the most common ways to ruin your NoSQL database performance, along with tips on how to avoid or resolve them.

Not Addressing Large Partitions

Large partitions commonly emerge as teams scale their distributed databases. These are partitions that grow so big that they start introducing performance problems across the cluster’s replicas.

One of the questions that we hear often — at least once a month — is “What constitutes a large partition?” Well, it depends. Some things to consider:

  • Latency expectations:  The larger your partition grows, the longer it will take to be retrieved. Consider your page size and the number of client-server round trips needed to fully scan a partition.
  • Average payload size: Larger payloads generally lead to higher latency. They require more server-side processing time for serialization and deserialization and also incur a higher network data transmission overhead.
  • Workload needs: Some workloads organically require larger payloads than others. For instance, I’ve worked with a Web3 blockchain company that would store several transactions as BLOBs under a single key, and every key could easily get past 1 megabyte in size.
  • How you read from these partitions: For example, a time series use case will typically have a timestamp clustering component. In that case, reading from a specific time window will retrieve much less data than if you were to scan the entire partition.

The following table illustrates the impact of large partitions under different payload sizes, such as 1, 2 and 4 kilobytes.

As you can see, the higher your payload gets under the same row count, the larger your partition is going to be. However, if your use case frequently requires scanning partitions as a whole, then be aware that databases have limits to prevent unbounded memory consumption.

For example, ScyllaDB cuts off pages at every 1MB to prevent the system from potentially running out of memory. Other databases (even relational ones) have similar protection mechanisms to prevent an unbounded bad query from starving the database resources.

To retrieve a payload size of 4KB across 10K rows with ScyllaDB, you would need to retrieve at least 40 pages to scan the partition with a single query. This may not seem like a big deal at first, but as you scale over time, it can affect your overall client-side tail latency.
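The page arithmetic above is easy to reproduce as a back-of-envelope calculation. A minimal Python sketch (the 1 MB figure is the ScyllaDB page cutoff mentioned above; other databases will differ):

```python
import math

def pages_to_scan(row_payload_bytes, row_count, page_limit_bytes=1_000_000):
    """Rough count of pages needed to scan one partition in full,
    given a per-page size cutoff (ScyllaDB cuts pages at ~1 MB)."""
    partition_bytes = row_payload_bytes * row_count
    return math.ceil(partition_bytes / page_limit_bytes)

# 4 KB payloads x 10K rows = ~40 MB partition -> ~40 pages per full scan
print(pages_to_scan(4_000, 10_000))  # → 40
```

Each page is a separate client-server round trip, which is why full scans of large partitions inflate tail latency.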

Another consideration: With databases like ScyllaDB and Cassandra, data written to the database is stored in the commit log and under an in-memory data structure called a “memtable.”

The commit log is a write-ahead log that is never really read from, except when there’s a server crash or a service interruption. Since the memtable lives in memory, it eventually gets full. To free up memory space, the database flushes memtables to disk. That process results in SSTables (sorted strings tables), which is how your data gets persisted.

What does all this have to do with large partitions? Well, SSTables have specific components that need to be held in memory when the database starts. This ensures that reads are always efficient and minimizes wasting storage disk I/O when looking for data. When you have extremely large partitions (for example, we recently had a user with a 2.5 terabyte partition in ScyllaDB), these SSTable components introduce heavy memory pressure, therefore shrinking the database’s room for caching and further constraining your latencies.

How do you address large partitions via data modeling? Basically, it’s time to rethink your primary key. The primary key determines how your data is distributed across the cluster; a well-chosen key improves performance as well as resource utilization.

A good partition key should have high cardinality and roughly even distribution. For example, a high cardinality attribute like User Name, User ID or Sensor ID might be a good partition key. Something like State would be a bad choice because states like California and Texas are likely to have more data than less populated states such as Wyoming and Vermont.

Or consider this example. The following table could be used in a distributed air quality monitoring system with multiple sensors:

With time being our table’s clustering key, it’s easy to imagine that partitions for each sensor can grow very large, especially if data is gathered every couple of milliseconds. This innocent-looking table can eventually become unusable. In this example, it takes only ~50 days.

A standard solution is to amend the data model to reduce the number of clustering keys per partition key. In this case, let’s take a look at the updated air_quality_data table:

After the change, one partition holds the values gathered in a single day, which makes it less likely to overflow. This technique is called bucketing, as it allows us to control how much data is stored in partitions.
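The bucketing idea can be sketched independently of any database: derive the partition key from the sensor ID plus a day-sized time bucket, so no single partition grows without bound. A minimal Python illustration (the key layout is an assumption for illustration, not driver code):

```python
from datetime import datetime, timezone

def bucketed_partition_key(sensor_id, ts):
    """Partition key for a bucketed time-series table: one partition per
    (sensor, day) instead of one unbounded partition per sensor.
    The timestamp stays in the clustering key for range scans within a day."""
    return (sensor_id, ts.date().isoformat())

ts = datetime(2023, 9, 27, 9, 0, 16, tzinfo=timezone.utc)
print(bucketed_partition_key("sensor-42", ts))  # → ('sensor-42', '2023-09-27')
```

The bucket size is a tuning knob: a day works for millisecond-frequency samples here, but hotter writers may need hourly buckets.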

Bonus: See how Discord applies the same bucketing technique to avoid large partitions.

Introducing Hot Spots

Hot spots can be a side effect of large partitions. If you have a large partition (storing a large portion of your data set), it’s quite likely that your application access patterns will hit that partition more frequently than others. In that case, it also becomes a hot spot.

Hot spots occur whenever a problematic data access pattern causes an imbalance in the way data is accessed in your cluster. One culprit: when the application fails to impose any limits on the client side and allows tenants to potentially spam a given key.

For example, think about bots in a messaging app frequently spamming messages in a channel. Hot spots could also be introduced by erratic client-side configurations in the form of retry storms. That is, a client attempts to query specific data, times out before the database does and retries the query while the database is still processing the previous one.

Monitoring dashboards should make it simple for you to find hot spots in your cluster. For example, this dashboard shows that shard 20 is overwhelmed with reads.

For another example, the following graph shows three shards with higher utilization, which correlates with the replication factor of three configured for the keyspace in question.

Here, shard 7 introduces a much higher load due to the spamming.

How do you address hot spots? First, use a vendor utility on one of the affected nodes to sample which keys are most frequently hit during your sampling period. You can also use tracing, such as probabilistic tracing, to analyze which queries are hitting which shards and then act from there.

If you find hot spots, consider:

  • Reviewing your application access patterns. You might find that you need a data modeling change, such as the previously mentioned bucketing technique. If you need sorting, you could use a monotonically increasing component, such as a Snowflake ID. Or maybe it’s best to apply a concurrency limiter and throttle down potential bad actors.
  • Specifying per-partition rate limits, after which the database will reject any queries that hit that same partition.
  • Ensuring that your client-side timeouts are higher than the server-side timeouts to prevent clients from retrying queries before the server has a chance to process them (“retry storms”).
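The per-partition throttling idea can also be approximated on the client side. Here is a minimal token-bucket sketch keyed by partition; it is purely illustrative (the class, rates, and keys are assumptions, not a vendor API):

```python
import time

class PartitionRateLimiter:
    """Client-side token bucket per partition key: a crude guard against a
    single tenant spamming one partition (illustrative sketch only)."""
    def __init__(self, rate_per_sec, burst):
        self.rate = rate_per_sec      # tokens refilled per second
        self.burst = burst            # bucket capacity
        self.buckets = {}             # key -> (tokens, last_refill_timestamp)

    def allow(self, key, now=None):
        now = time.monotonic() if now is None else now
        tokens, last = self.buckets.get(key, (self.burst, now))
        # Refill proportionally to elapsed time, capped at the burst size.
        tokens = min(self.burst, tokens + (now - last) * self.rate)
        if tokens >= 1:
            self.buckets[key] = (tokens - 1, now)
            return True
        self.buckets[key] = (tokens, now)
        return False
```

A limiter like this drops or queues excess requests before they ever reach the hot shard; server-side per-partition rate limits, where the database supports them, remain the stronger guarantee.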

Misusing Collections

Teams don’t always use collections, but when they do, they often use them incorrectly. Collections are meant for storing/denormalizing a relatively small amount of data. They’re essentially stored in a single cell, which can make serialization/deserialization extremely expensive.

When you use collections, you can define whether the field in question is frozen or non-frozen. A frozen collection can only be written as a whole; you cannot append or remove elements from it. A non-frozen collection can be appended to, and that’s exactly the type of collection that people most misuse. To make matters worse, you can even have nested collections, such as a map that contains another map, which includes a list, and so on.

Misused collections will introduce performance problems much sooner than large partitions do. If you care about performance, collections can’t be very large at all. For example, if we create a simple key:value table, where our key is a sensor_id and our value is a collection of samples recorded over time, performance will be suboptimal as soon as we start ingesting data.

The following monitoring snapshots show what happens when you try to append several items to a collection at once.

You can see that while the throughput decreases, the p99 latency increases. Why does this occur?

  • Collection cells are stored in memory as sorted vectors.
  • Adding elements requires a merge of two collections (old and new).
  • Adding an element has a cost proportional to the size of the entire collection.
  • Trees (instead of vectors) would improve the performance, BUT…
  • Trees would make small collections less efficient!
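A toy model of the sorted-vector behaviour described above shows why appends get slower as the collection grows (pure illustration; real cells live server-side and this is not driver code):

```python
import bisect

def append_to_cell(cell, element):
    """Simulates a write to a non-frozen collection cell kept as a sorted
    vector: each append rewrites the cell, merging old contents with the
    new element, so the cost is proportional to the cell's size
    (bisect finds the slot in O(log n), but the copy/shift is linear)."""
    new_cell = list(cell)              # the old cell is rewritten...
    bisect.insort(new_cell, element)   # ...merged with the incoming element
    return new_cell

cell = []
for sample in [7, 3, 9, 1]:
    cell = append_to_cell(cell, sample)
print(cell)  # → [1, 3, 7, 9]
```

Because every append pays for the whole cell, a collection that keeps growing turns each small write into a progressively larger copy, which is exactly the throughput/latency degradation shown in the monitoring snapshots.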

Returning to that same example, the solution would be to move the timestamp to a clustering key and transform the map into a frozen collection (since you no longer need to append data to it). These very simple changes will greatly improve the performance of the use case.

Learn More: On-Demand NoSQL Data Modeling Masterclass

Want to learn more about NoSQL data modeling best practices for performance? Take a look at our NoSQL data modeling masterclass — three hours of expert instruction, now on demand and free. You will learn how to:

  • Analyze your application’s data usage patterns and determine which data modeling approach will be most performant for your specific usage patterns.
  • Select the appropriate data modeling options to address a broad range of technical challenges, including the benefits and trade-offs of each option.
  • Apply common NoSQL data modeling strategies in the context of a sample application.
  • Identify signs that indicate your data modeling is at risk of causing hot spots, timeouts and performance degradation — and how to recover.





Investor Sentiment and Options Activity Surrounding MongoDB – Best Stocks

MMS Founder
MMS RSS

Posted on mongodb google news. Visit mongodb google news

As of September 27, 2023, a significant investor with substantial funds has displayed a notably pessimistic outlook on MongoDB. Intriguingly, the options history for MongoDB (NASDAQ:MDB) reveals 13 unusual trades: 6 involve puts, amounting to a total value of $405,580, while 7 entail calls, totaling $1,679,486. These trades, combined with the volume and open interest on the contracts, indicate that influential investors have been focusing on a price range between $300.0 and $410.0 for MongoDB over the past three months. At present, MDB’s price stands at $328.73, a 1.05% increase.

MongoDB, Inc. (MDB): Buy (updated 27/09/2023)

Price target: current $329.40, consensus $388.06, low $180.00, median $406.50, high $630.00

Analyst ratings:

  • Miller Jump, Truist Financial: Buy
  • Mike Cikos, Needham: Buy
  • Rishi Jaluria, RBC Capital: Sell
  • Ittai Kidron, Oppenheimer: Sell
  • Matthew Broome, Mizuho Securities: Sell

MongoDB Inc. (MDB) Stock Performance and Forecast: Mixed Day with Promising Growth

On September 27, 2023, the stock of MongoDB Inc. (MDB) experienced a mixed performance. MDB stock opened at $326.22, slightly higher than the previous day’s closing price of $325.22. Throughout the day, the stock fluctuated within a range of $324.92 to $332.37. The trading volume for MDB on this day was 29,813 shares, significantly lower than the average volume of 1,659,509 shares over the past three months. MDB has a market capitalization of $24.0 billion.

The company’s earnings growth in the last year was -5.89%, but this year it has seen a growth rate of +92.12%. The earnings growth forecast for the next five years is +8.00%. In terms of revenue growth, MDB experienced a significant increase of +46.95% in the last year. The P/E ratio for MDB is not provided (NM), but the price/sales ratio is 11.45 and the price/book ratio is 31.75. On September 27, MDB’s stock price had a slight decline of -0.17, a percentage change of -0.12%.

Comparing MDB’s performance to other technology services companies: Take-Two Interactive had a decrease of -0.17 (-0.12%), HubSpot Inc. had an increase of +11.94 (+2.55%), Splunk Inc. had an increase of +0.91 (+0.62%), and Zscaler Inc. had a decrease of -0.67 (-0.45%). MDB is scheduled to report its next earnings on December 6, 2023, with an earnings per share forecast of $0.27. MDB reported annual revenue of $1.3 billion in the last year, but incurred a net loss of -$345.4 million, resulting in a net profit margin of -26.90%. MDB operates in the technology services sector, in the packaged software industry, and its corporate headquarters are located in New York, New York.

MDB Stock Analysis: Median Target of $450 with Positive Consensus among Analysts

MDB stock has a median target of $450.00, with a high estimate of $500.00 and a low estimate of $250.00. The consensus among 28 polled investment analysts is to buy stock in MongoDB Inc. MongoDB Inc reported earnings per share of $0.27 for the current quarter and sales of $389.8 million. Investors can expect more information about MongoDB Inc’s financial performance on December 6. MDB stock appears to be a promising investment option, but investors should conduct their own research and consider their individual financial goals and risk tolerance.

Article originally posted on mongodb google news. Visit mongodb google news



MongoDB Launches Advanced Data Management Capabilities to Run Applications Anywhere

MMS Founder
MMS RSS

Posted on mongodb google news. Visit mongodb google news

LONDON, Sept. 27, 2023 — MongoDB, Inc. has announced MongoDB Atlas for the Edge, a set of capabilities that make it easier for organizations to deploy applications closer to where real-time data is generated, processed, and stored—across devices, on-premises data centers, and major cloud providers.

With MongoDB Atlas for the Edge, data is securely stored and synchronized in real time across data sources and destinations to provide highly available, resilient, and reliable applications. Organizations can now use MongoDB Atlas for the Edge to build, deploy, and manage applications that are accessible virtually anywhere for use cases like connected vehicles, smart factories, and supply chain optimization—without the complexity typically associated with operating distributed applications at the edge.

“Flexibility and abstracting away complexity is one of the key attributes of a development experience that our customers have come to expect from us,” said Sahir Azam, Chief Product Officer at MongoDB. “Atlas for the Edge delivers a consistent development experience across the data layer for applications running anywhere—from mobile devices, kiosks in retail locations, remote manufacturing facilities, and on-premises data centers all the way to the cloud. Now, customers can more easily build and manage distributed applications securely using data at the edge with high availability, resilience, and reliability—and without the complexity and heavy lifting of managing complex edge deployments.”

Advancements in edge computing offer significant opportunities for organizations to deploy distributed applications to reach end users anywhere with real-time experiences. However, many organizations today that want to take advantage of edge computing lack the technical expertise to manage the complexity of networking and high volumes of distributed data required to deliver reliable applications that run anywhere. Many edge deployments involve stitching together hardware and software solutions from multiple vendors, resulting in complex and fragile systems that are often built using legacy technology that is limited by one-way data movement and requires specialized skills to manage and operate. Further, edge devices may require constant optimization due to their constraints—like limited data storage and intermittent network access—which makes keeping operational data in sync between edge locations and the cloud difficult.

Edge deployments can also be prone to security vulnerabilities, and data stored and shared across edge locations must be encrypted in transit and at rest with centralized access management controls to ensure data privacy and compliance. As a result of this complexity, many organizations struggle to deploy and run distributed applications that can reach end users with real-time experiences wherever they are.

MongoDB Atlas for the Edge eliminates this complexity, providing capabilities to build, manage, and deploy distributed applications that can securely use real-time data in the cloud and at the edge with high availability, resilience, and reliability. Tens of thousands of customers and millions of developers today rely on MongoDB Atlas to run business-critical applications for real-time inventory management, predictive maintenance, and high-volume financial transactions.

With MongoDB Atlas for the Edge, organizations can now use a single, unified interface to deliver a consistent and frictionless development experience from the edge to the cloud—and everything in between—with the ability to build distributed applications that can process, analyze, and synchronize virtually any type of data across locations. Together, the capabilities included with MongoDB Atlas for the Edge allow organizations to significantly reduce the complexity of building, deploying, and managing the distributed data systems that are required to run modern applications anywhere:

  • Deploy MongoDB on a variety of edge infrastructure for high reliability with ultra-low latency: With MongoDB Atlas for the Edge, organizations can run applications on MongoDB using a wide variety of infrastructure, including self-managed on-premises servers, such as those in remote warehouses or hospitals, in addition to edge infrastructure managed by major cloud providers including Amazon Web Services (AWS), Google Cloud, and Microsoft Azure.
  • Run applications in locations with intermittent network connectivity: With MongoDB Atlas Edge Server and Atlas Device Sync, organizations can use a pre-built, local-first data synchronization layer for applications running on kiosks or on mobile and IoT devices to prevent data loss and improve offline application experiences. MongoDB Atlas Edge Servers can be deployed in remote locations to allow devices to sync directly with each other—without the need for connectivity to the cloud—using built-in network management capabilities.
  • Build and deploy AI-powered applications at the edge: MongoDB Atlas for the Edge provides integrations with generative AI and machine learning technologies to provide low-latency, intelligent functionality at the edge directly on devices—even when network connectivity is unavailable.
  • Store and process real-time and batch data from IoT devices to make it actionable: With MongoDB Atlas Stream Processing, organizations can ingest and process high-velocity, high-volume data from millions of IoT devices (e.g., equipment sensors, factory machinery, medical devices) in real-time streams or in batches when network connectivity is available. Data can then be easily aggregated, stored, and analyzed using MongoDB Time Series collections for use cases like predictive maintenance and anomaly detection with real-time reporting and alerting capabilities. MongoDB Atlas for the Edge provides all of the tools necessary to process and synchronize virtually any type of data across edge locations and the cloud to ensure consistency and availability.
  • Easily secure edge applications for data privacy and compliance: MongoDB Atlas for the Edge helps organizations ensure their edge deployments are secure with built-in security capabilities. The Atlas Device SDK provides out-of-the-box data encryption at rest, on devices, and in transit over networks to ensure data is protected and secure. Additionally, Atlas Device Sync provides fine-grained role-based access, with built-in identity and access management (IAM) capabilities that can also be combined with third-party IAM services to easily integrate edge deployments with existing security and compliance solutions.

“High reliability and ultra-low latency are key requirements that impact customers’ ability to access and process their data. This is where AWS’s edge services help meet customers’ data-intensive workload needs,” said Amir Rao, Director of Product Management for Telco at AWS. “With MongoDB Atlas for the Edge, customers can take advantage of managed edge infrastructure like AWS Local Zones, AWS Wavelength, and AWS Outposts to process data closer to end users and power applications across generative AI and machine learning, IoT, and robotics—making it easier for them to build, manage, and deploy their applications anywhere.”

About MongoDB Atlas

MongoDB Atlas is the leading multi-cloud developer data platform that accelerates and simplifies building applications with data. MongoDB Atlas provides an integrated set of data and application services in a unified environment that enables development teams to quickly build with the performance and scale modern applications require. Tens of thousands of customers and millions of developers worldwide rely on MongoDB Atlas every day to power their business-critical applications. To get started with MongoDB Atlas, visit mongodb.com/atlas.

About MongoDB

Headquartered in New York, MongoDB‘s mission is to empower innovators to create, transform, and disrupt industries by unleashing the power of software and data. Built by developers, for developers, our developer data platform is a database with an integrated set of related services that allow development teams to address the growing requirements for today’s wide variety of modern applications, all in a unified and consistent user experience. MongoDB has tens of thousands of customers in over 100 countries. The MongoDB database platform has been downloaded hundreds of millions of times since 2007, and there have been millions of builders trained through MongoDB University courses.


Source: MongoDB

Article originally posted on mongodb google news. Visit mongodb google news

Subscribe for MMS Newsletter

By signing up, you will receive updates about our latest information.



MongoDB Unusual Options Activity For September 27 – Benzinga

MMS Founder
MMS RSS

Posted on mongodb google news. Visit mongodb google news

A whale with a lot of money to spend has taken a noticeably bearish stance on MongoDB.

Looking at the options history for MongoDB (MDB), we detected 13 unusual trades.

Examining the specifics of each trade, 46% of investors opened trades with bullish expectations and 53% with bearish ones.

Of the trades spotted, 6 are puts, totaling $405,580, and 7 are calls, totaling $1,679,486.
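Those headline figures are straightforward to recompute from the counts and premiums quoted (pure arithmetic; no market data involved):

```python
# Figures quoted in the article: 6 puts and 7 calls, with total premiums.
put_count, put_premium = 6, 405_580
call_count, call_premium = 7, 1_679_486

# Dollar-weighted put/call ratio: how bearish the premium flow is.
put_call_premium_ratio = put_premium / call_premium
print(f"put/call premium ratio: {put_call_premium_ratio:.2f}")

# Share of trades that were puts (6 of 13, matching the article's 46%).
print(f"puts: {put_count / (put_count + call_count):.0%} of trades")
```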

What’s The Price Target?

Taking into account the Volume and Open Interest on these contracts, it appears that whales have been targeting a price range from $300.0 to $410.0 for MongoDB over the last 3 months.

Volume & Open Interest Development

Looking at the volume and open interest is a powerful tool when trading options. This data can help you track the liquidity and interest in MongoDB’s options at a given strike price. Below, we can observe the evolution of the volume and open interest of calls and puts, respectively, for all of MongoDB’s whale trades within a strike price range from $300.00 to $410.00 over the last 30 days.

MongoDB Option Volume And Open Interest Over Last 30 Days

Biggest Options Spotted:

Symbol PUT/CALL Trade Type Sentiment Exp. Date Strike Price Total Trade Price Open Interest Volume
MDB CALL TRADE BEARISH 01/17/25 $400.00 $1.3M 88 200
MDB CALL TRADE BULLISH 01/19/24 $330.00 $103.3K 251 52
MDB CALL TRADE BULLISH 01/19/24 $330.00 $102.7K 251 27
MDB PUT TRADE BULLISH 12/15/23 $410.00 $86.8K 102 10
MDB PUT SWEEP BEARISH 11/17/23 $300.00 $78.2K 1.1K 74

Where Is MongoDB Standing Right Now?

  • With a volume of 448,914, the price of MDB is up 1.05% at $328.73.
  • RSI indicators hint that the underlying stock is currently neutral between overbought and oversold.
  • Next earnings are expected to be released in 69 days.

What The Experts Say On MongoDB:

  • Argus Research maintained its Buy rating on MongoDB with a price target of $484.
  • Barclays maintained its Overweight rating with a price target of $450.
  • Needham maintained its Buy rating with a price target of $445.
  • Piper Sandler maintained its Overweight rating with a price target of $425.
  • UBS maintained its Buy rating with a price target of $465.

Options are a riskier asset compared to just trading the stock, but they have higher profit potential. Serious options traders manage this risk by educating themselves daily, scaling in and out of trades, following more than one indicator, and following the markets closely.

If you want to stay updated on the latest options trades for MongoDB, Benzinga Pro gives you real-time options trades alerts.



MongoDB (MDB) Enhances Real-Time Data Management for Apps – September 27, 2023

MMS Founder
MMS RSS


MongoDB (MDB Free Report) has unveiled MongoDB Atlas for the Edge, comprising features that simplify the deployment of applications in proximity to where real-time data is generated, processed and stored. These deployments can span across various devices, on-site data centers and major cloud platforms.

With MongoDB Atlas for the Edge, data is securely stored and continuously synchronized between data sources and destinations, ensuring the availability, resilience and reliability of applications. Organizations can now utilize MongoDB Atlas for the Edge to create, deploy and oversee applications that can be accessed from nearly anywhere.

These new features simplify the process, allowing for the development, control and deployment of distributed applications that can securely utilize real-time data both in the cloud and at the edge.

With MongoDB Atlas for the Edge, organizations can now use a unified interface to offer a consistent and smooth development experience from the edge to the cloud and everything in between. This includes the capability to create distributed applications that can handle, analyze and synchronize virtually any data type across various locations. The launch is expected to aid top-line growth as well as customer additions in the upcoming quarter.

The Zacks Consensus Estimate for MDB’s fiscal 2024 revenues is pegged at $1.61 billion, indicating year-over-year growth of 48.34%. The Zacks Consensus Estimate for total customers is pegged at 47,871, indicating a year-over-year increase of 17.3%.

 

MongoDB Faces Tough Competition in the Data Management Market

Cloud databases provide cost-effective, flexible and scalable solutions for companies looking to simplify data management. They offer easy accessibility, quick data recovery and enhanced security features, making cloud databases a valuable alternative for businesses seeking efficient data storage and management solutions.

According to an MMR report, the Cloud Database Market was valued at $10.13 billion in 2022 and is expected to reach $47.93 billion by 2029, exhibiting a CAGR of 24.85% during the forecast period (2023-2029). The leading database providers include Amazon’s (AMZN Free Report) Amazon Web Services (“AWS”) division, SAP (SAP Free Report) and Oracle (ORCL Free Report).

Amazon provides a diverse range of cloud database services, encompassing both NoSQL and Relational Database Service (RDS). Amazon RDS operates on Oracle, SQL Server or MySQL instances. In contrast, Amazon SimpleDB is a schema-less database designed for handling smaller workloads. On the NoSQL side, Amazon DynamoDB stands out as a solid-state-drive-powered database that autonomously replicates workloads across three separate availability zones.

SAP, a leading provider of enterprise software, has introduced a cloud database platform called HANA to enhance an organization’s on-premises database tools. SAP HANA complements various database tools like Sybase and it is accessible on the AWS cloud.

Oracle’s relational database is ideal for rapid data storage and retrieval in cloud environments. It is well-suited for storing data used in online transaction processing. Nonetheless, a significant challenge lies in securely restoring and maintaining the integrity of this data.

MongoDB, which currently carries a Zacks Rank #3 (Hold), has developed an encryption technology that gives customers the freedom to review the cryptographic methods and the code used in this mechanism.

You can see the complete list of today’s Zacks #1 Rank (Strong Buy) stocks here.

Shares of MDB have gained 65.3% year to date compared with the Zacks Computer and Technology sector’s rise of 33.1% due to the general availability of this end-to-end data encryption technology.



MongoDB adds generative AI features to boost developer productivity – InfoWorld

MMS Founder
MMS RSS


After adding vector search to its NoSQL Atlas database-as-a-service (DBaaS) in June, MongoDB is adding new generative AI features to a few tools in order to further boost developer productivity.

The new features have been added to MongoDB’s Relational Migrator, Compass, Atlas Charts tools, and its Documentation interface.

In its Documentation interface, MongoDB is adding an AI-powered chatbot that will allow developers to ask questions and receive answers about MongoDB’s products and services, in addition to providing troubleshooting support during software development.

The chatbot inside MongoDB Documentation, which has been made generally available, is an open source project that uses MongoDB Atlas Vector Search for AI-powered information retrieval of curated data to answer questions with context, MongoDB said.

Developers can use the project code to build and deploy their own chatbots for a variety of use cases.

To accelerate application modernization, MongoDB has integrated AI capabilities, such as intelligent data schema and code recommendations, into its Relational Migrator.

The Relational Migrator can automatically convert SQL queries and stored procedures in legacy applications to MongoDB Query API syntax, the company said, adding that the automatic conversion feature eliminates the need for developers to have knowledge of MongoDB syntax.
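To illustrate the kind of translation involved (a hand-written example, not actual Relational Migrator output), a SQL predicate maps onto a MongoDB Query API filter document. The `$gt` operator is real MongoDB syntax; the toy matcher below merely mimics the filter's semantics in memory:

```python
# The SQL statement:
#   SELECT * FROM users WHERE age > 30 AND city = 'Oslo'
# maps to this MongoDB Query API filter document:
mql_filter = {"age": {"$gt": 30}, "city": "Oslo"}

# A toy in-memory matcher, just to show the filter's semantics
# (supports only equality and $gt; real MongoDB has many more operators).
def matches(doc, flt):
    for field, cond in flt.items():
        if isinstance(cond, dict):
            if not all(op == "$gt" and doc.get(field, float("-inf")) > val
                       for op, val in cond.items()):
                return False
        elif doc.get(field) != cond:
            return False
    return True

users = [
    {"name": "Kari", "age": 34, "city": "Oslo"},
    {"name": "Ola", "age": 28, "city": "Oslo"},
    {"name": "Ingrid", "age": 41, "city": "Bergen"},
]
print([u["name"] for u in users if matches(u, mql_filter)])  # ['Kari']
```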

Further, the company is adding a natural language processing capability to MongoDB Compass, which is an interface for querying, aggregating, and analyzing data stored in MongoDB.

The natural language prompt included in Compass has the ability to generate executable MongoDB Query API syntax, the company said.

A similar natural language capability has also been added to MongoDB Atlas Charts, which is a data visualization tool that allows developers to easily create, share, and embed visualizations using data stored in MongoDB Atlas.

“With new AI-powered capabilities, developers can build data visualizations, create graphics, and generate dashboards within MongoDB Atlas Charts using natural language,” the company said in a statement.

The new AI-powered features in MongoDB Relational Migrator, MongoDB Compass, and MongoDB Atlas Charts are currently in preview.

In addition to these updates, the company has also released a new set of capabilities to help developers deploy MongoDB at the edge in order to garner more real-time data and help build AI-powered applications at the edge location.

Dubbed MongoDB Atlas for the Edge, these capabilities will allow enterprises to run applications on MongoDB using a wide variety of infrastructure, including self-managed on-premises servers and edge infrastructure managed by major cloud providers, including Amazon Web Services (AWS), Google Cloud, and Microsoft Azure, the company said.



MongoDB takes Atlas to the Edge, adds AI tools for devs – The Stack

MMS Founder
MMS RSS


MongoDB’s .local event in London this week lived up to its name with a flurry of new features that let developers build and test applications locally in self-managed environments – with the company also boosting its ability to run the MongoDB Atlas managed cloud Database-as-a-Service (DBaaS) locally at the “Edge” in remote/poor-bandwidth locations.

At the latest stop on its global tour of .local events, MongoDB also touted a flurry of new AI features, including a chatbot that lets developers ask questions of its documentation in natural language. Also particularly welcomed by customers at the event was a new AI tool that lets its MongoDB Relational Migrator convert SQL to MongoDB Query API syntax automatically for swift migrations off relational databases.

MongoDB Atlas at the Edge

Unveiling the features at the London leg of its developer conference series, the software company said the Atlas at the Edge capabilities will make it easier for organisations to deploy applications closer to where real-time data is generated, processed and stored; whether that’s devices, on-premises data centres or major cloud providers.

(MongoDB’s flexible schema has made it popular with over 44,000 customers globally. Unlike SQL databases, which require a table’s schema to be determined and declared before inserting data, MongoDB does not require documents to have the same schema: the documents in a single “collection” – the equivalent of an RDBMS table – do not need to have the same set of fields, and the data type for a field can differ. Meanwhile, its multi-cloud managed DBaaS Atlas now accounts for over 60% of revenue and is a $1 billion+ ARR business.)
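The flexible-schema point in the parenthetical above can be made concrete with plain Python dicts standing in for documents (a hypothetical collection; no server needed):

```python
# Unlike rows in an RDBMS table, documents in one MongoDB collection can
# vary in shape; these three could all live in one "products" collection.
products = [
    {"_id": 1, "name": "sensor", "price": 19.99},
    {"_id": 2, "name": "gateway", "price": 149.0, "ports": 4},
    {"_id": 3, "name": "firmware", "version": "2.1"},
]

# Which fields are shared by every document, and which are optional?
common = set.intersection(*(set(p) for p in products))
all_fields = set.union(*(set(p) for p in products))
print(sorted(common))               # fields present in every document
print(sorted(all_fields - common))  # fields only some documents carry
```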

MongoDB’s Chief Product Officer, Sahir Azam, said in a keynote: “You can now deploy Atlas edge servers anywhere…that can be a factory, an oil rig, a mobile device or a sensor, all with seamless experience and integration. This allows you to have ultra-low latency compute where the data is generated. And… process that information locally.”

Azam added that Atlas for the Edge will allow developers to build on top of Edge servers with the standard MongoDB drivers or device SDKs.

“It’s seamless and natural: there’s nothing new to learn,” he said.

“You can extend your data layer even further to wearables, point of sale systems, etc., all dealing with conflict resolution automatically as well as intermittent connections. So if you have a mobile application that loses connectivity, you can still deliver a great experience and automatically synchronise the data when connectivity returns,” he added at the event.

Exploring this more closely, The Stack spoke with Ian Ward, Atlas Edge product manager, about the new private preview of Atlas Edge Server with syncing capabilities and wire protocol access. This will work by letting organisations deploy Atlas Edge Server on any infrastructure and automatically sync operational data that is essential for applications across edge devices, on-premises locations, and the cloud.

This will give developers the option to use a pre-built, local-first data synchronisation layer for applications running on kiosks or on mobile and IoT devices. It will also help prevent data loss and improve offline application experiences, while eliminating the need for organisations to develop complex custom logic, saving development effort, he said.

A MongoDB.local London session in full flow.

“We’ve brought that device synchronisation capability so that you can have an edge server deployed in your backup office in a retailer and then you have tablet devices out in the storefront that then synchronise to that backup office automatically,” Ward told The Stack.

“And that enables them to go offline, enables them to share data, and they come back online to resolve any conflicts.”

He added that while this will empower device synchronisation use cases, it’ll also offer other benefits that device sync provides, such as conflict resolution, disconnection tolerance and real-time features.
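Conflict resolution of the kind Ward describes can be pictured with the simplest possible policy, last-write-wins, sketched in plain Python below. The document fields are hypothetical, and Atlas Device Sync's built-in rules are considerably richer than this:

```python
from datetime import datetime

# Two replicas of the same document diverged while a device was offline.
server_copy = {"_id": "order-42", "status": "packed",
               "updated_at": datetime(2023, 9, 27, 10, 0)}
device_copy = {"_id": "order-42", "status": "delivered",
               "updated_at": datetime(2023, 9, 27, 10, 5)}

def last_write_wins(a, b):
    """Simplest possible conflict policy: keep the most recent write."""
    return a if a["updated_at"] >= b["updated_at"] else b

merged = last_write_wins(server_copy, device_copy)
print(merged["status"])  # 'delivered': the later device write wins
```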

“This is something that building a traditional MongoDB app server stack – such as the MERN stack that we all know and love – could also benefit from by enabling this capability on the edge server,” Ward added.

Edge devices powered by low-latency AI

With AI generating interest like no other technology, it’s hardly surprising that MongoDB’s Atlas for the Edge offers integrations for generative AI capabilities that it said will provide low-latency, intelligent functionality at the edge directly on devices, even when network connectivity is unavailable. For example, MongoDB’s Atlas Search and newly released Atlas Vector Search tools will be able to simplify the creation of intelligent applications with search and AI capabilities that use vector embeddings – the building blocks of large language models. These embeddings can be generated and stored in MongoDB Atlas and then used by edge applications powered by the Atlas Device SDK.

This platform enables the development of real-time mobile apps and facilitates tasks like image similarity searches and product defect identification. Developers can then employ the Atlas Device SDK to create, train, deploy and manage machine learning models on edge devices, using frameworks like CoreML, TensorFlow, and PyTorch for custom applications that make use of real-time data.
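The image-similarity search mentioned above reduces to nearest-neighbour lookup over embedding vectors. Here is a minimal cosine-similarity sketch in plain Python, with toy three-dimensional vectors and hypothetical labels; Atlas Vector Search performs the same comparison at scale with dedicated indexes:

```python
from math import sqrt

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (sqrt(sum(x * x for x in a)) * sqrt(sum(y * y for y in b)))

# Toy 3-dimensional embeddings standing in for LLM-generated vectors;
# real embeddings have hundreds or thousands of dimensions.
catalog = {
    "cracked casing": [0.9, 0.1, 0.0],
    "scratched lens": [0.7, 0.3, 0.1],
    "normal unit":    [0.0, 0.2, 0.9],
}
query = [0.8, 0.2, 0.0]  # embedding of a new defect photo (hypothetical)

best = max(catalog, key=lambda k: cosine(query, catalog[k]))
print(best)  # 'cracked casing'
```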

MongoDB Atlas for the Edge is infrastructure-agnostic and can be deployed on any server of an organisation’s choosing to simplify the synchronisation of data across sources and to MongoDB Atlas, it said.

While customers can take advantage of managed edge infrastructure like AWS Local Zones, AWS Wavelength, and AWS Outposts to process data closer to end users, it’s also available on infrastructure from other cloud providers, including Azure Stack Edge or Google Anthos.

MongoDB’s VP of Developer Relations, Matt Asay, believes the new edge capabilities will make a big difference for its customers moving forward: “Take Vodafone, for example. They’re adding a million new devices every month with all sorts of different shapes that have data stored in different formats and they need something that simplifies that,” he told The Stack in an interview after the firm’s keynote announcement.

“That’s the sort of thing that we’re seeing across all of our customers… the Edge news will have a big impact because it will help us to work with our customers across a wider variety of use cases.”



AE Wealth Management LLC Increases Stake in MongoDB, Inc. as It Reports Better-Than …

MMS Founder
MMS RSS


On September 26, 2023, it was reported that AE Wealth Management LLC has acquired a new position in MongoDB, Inc. (NASDAQ:MDB) during the second quarter. According to its recent filing with the Securities and Exchange Commission (SEC), the firm purchased 3,149 shares of the company’s stock, valued at approximately $1,294,000.

MongoDB (NASDAQ:MDB) recently released its quarterly earnings results on August 31st. The company reported earnings per share (EPS) of ($0.63) for the quarter, surpassing analysts’ consensus estimates of ($0.70) by $0.07. Additionally, MongoDB generated $423.79 million in revenue during the quarter, exceeding analyst expectations of $389.93 million.

Despite these positive performance indicators, the company had a negative return on equity of 29.69% and a negative net margin of 16.21%. Analysts predict that MongoDB will post ($2.17) earnings per share for the current year.

Several equities research analysts have issued reports on MDB shares. In one research note issued on September 1st, Oppenheimer increased their price target on MongoDB from $430.00 to $480.00 and gave the stock an “outperform” rating.

Furthermore, 22nd Century Group maintained their rating on MongoDB shares as well in a report published on June 26th.

Meanwhile, Robert W. Baird raised their price objective from $390.00 to $430.00 for MongoDB’s shares in a report published on June 23rd.

Royal Bank of Canada also reiterated an “outperform” rating and set a price target of $445.00 for MongoDB shares in a research report on September 1st.

Finally, William Blair also maintained an “outperform” rating for MongoDB’s shares in a research report released on June 2nd.

Overall, one research analyst has issued a sell rating, three have given a hold rating, and twenty-one have given a buy rating to the company’s stock. According to Bloomberg.com, the consensus rating for MongoDB is currently “Moderate Buy,” with an average price target of $418.08.

These ratings and price targets reflect the overall positive sentiment surrounding MongoDB and its potential for growth in the market.

In conclusion, AE Wealth Management LLC’s recent purchase of shares in MongoDB underscores the growing interest in the company. Despite reporting negative return on equity and net margin, MongoDB’s better-than-expected quarterly earnings results have boosted investor confidence. The various research reports further indicate optimism among analysts regarding the company’s future prospects. Investors will continue to monitor MDB closely as it navigates through the rest of the fiscal year.

MongoDB, Inc. (MDB): Buy

Updated on: 27/09/2023

Price Target
Current: $325.30
Consensus: $388.06
Low: $180.00
Median: $406.50
High: $630.00


Analyst Ratings

Analyst (Firm): Rating
Miller Jump (Truist Financial): Buy
Mike Cikos (Needham): Buy
Rishi Jaluria (RBC Capital): Sell
Ittai Kidron (Oppenheimer): Sell
Matthew Broome (Mizuho Securities): Sell

MongoDB Inc. Attracts Attention and Confidence from Institutional Investors and Hedge Funds


MongoDB Inc., a leading provider of modern, general-purpose database platforms, has attracted significant attention from institutional investors and hedge funds. Recent reports reveal that several major firms have modified their holdings of the company’s stock, indicating a growing interest in MongoDB as an investment opportunity.

Notable among these investors is Bessemer Group Inc., which acquired a new position in MongoDB shares during the fourth quarter of last year. The value of this new position was approximately $29,000, suggesting Bessemer’s confidence in the company’s potential for growth and profitability. Similarly, BI Asset Management Fondsmaeglerselskab A S purchased shares of MongoDB in the same period worth around $30,000. Global Retirement Partners LLC also took notice, boosting its position in the company by 346.7% in the first quarter of this year.

Manchester Capital Management LLC joined the ranks by acquiring a stake worth $36,000 in MongoDB during the first quarter. Additionally, Clearstead Advisors LLC showed its support for the company by purchasing MongoDB shares valued at $36,000 during the same period. These investments underscore the appeal that MongoDB holds for both institutional investors and hedge funds.

It is noteworthy that hedge funds and other institutional investors currently own a significant majority (88.89%) of MongoDB’s stock. This indicates widespread recognition within these sectors regarding MongoDB’s competitive edge and long-term growth prospects.

In addition to these developments with institutional investors and hedge funds, recent news surrounding key personnel at MongoDB has also captured attention. Mark Porter, Chief Technology Officer (CTO), made headlines with his sale of 2,734 shares on July 3rd at an average price of $412.33 per share. This transaction amounted to approximately $1,127,310.22. Following this sale, Porter now holds 35,056 shares valued at roughly $14,454,640.48.

Furthermore, CEO Dev Ittycheria sold 50,000 shares of MongoDB stock on July 5th at an average price of $407.07 per share, resulting in a total transaction value of $20,353,500.00. Ittycheria currently owns 218,085 shares in the company valued at approximately $88,775,860.95.

These insider transactions provide a glimpse into the confidence that key figures within MongoDB have regarding the company’s trajectory and prospects for growth. They also offer insights into the significant stakes held by these executives, reinforcing their commitment to MongoDB’s success.

As for the current state of MongoDB’s stock, it opened at $333.31 on September 26th with a market capitalization of $23.78 billion. The stock exhibits a price-to-earnings ratio of -96.33 and has a beta of 1.11. Over the last year, its value has ranged from a low of $135.15 to a high of $439.00.

It is worth noting that MongoDB’s financial position appears robust with manageable debt levels—a debt-to-equity ratio of 1.29—and healthy liquidity indicated by current and quick ratios both standing at 4.48.

Overall, the recent activities involving institutional investors, hedge funds, and key personnel show growing interest and confidence in MongoDB as a valuable investment opportunity within the database platform sector. With its strong market presence and proven track record in delivering modern database solutions, MongoDB is positioned favorably for continued growth and success in the foreseeable future.

