New Azure Cosmos DB Features to Boost Performance and Optimize Cost

MMS Founder
MMS Steef-Jan Wiggers

Article originally posted on InfoQ. Visit InfoQ

Microsoft has recently unveiled several new features for Azure Cosmos DB to enhance cost efficiency, boost performance, and increase elasticity. These features are burst capacity, hierarchical partition keys, serverless container storage of 1 TB, and priority-based execution.

Azure Cosmos DB is Microsoft's globally distributed, multi-model database service that offers low-latency, scalable storage and querying of diverse data types. The burst capacity feature for the service was announced at the recent annual Build conference and is now generally available (GA), allowing users to take advantage of their database or container's idle throughput capacity to handle traffic spikes. In addition, the company announced the GA of hierarchical partition keys, also known as sub-partitioning, for Azure Cosmos DB for NoSQL, and the expansion of the storage capacity of serverless containers to 1 TB (previously, the limit was 50 GB).

Beyond Build, the company also introduced the public preview of priority-based execution, a capability that allows users to specify the priority for the request sent to Azure Cosmos DB. When the number of requests exceeds the configured Request Units per second (RU/s) in Azure Cosmos DB, low-priority requests are throttled to prioritize the execution of high-priority requests, as determined by the user-defined priority.

Richa Gaur, a Product Manager at Microsoft, explains in a blog post:

This capability allows a user to perform more important tasks while delaying less important tasks when there are a higher number of requests than what a container with configured RU/s can handle at a given time. Less important tasks will be continuously retried by any client using an SDK based on the retry policy configured.

The following C# snippet shows how to assign a priority level to individual requests:

using Microsoft.Azure.Cosmos;

// Update the product catalog as a low-priority (background) request
ItemRequestOptions catalogRequestOptions = new ItemRequestOptions { PriorityLevel = PriorityLevel.Low };

PartitionKey catalogPartitionKey = new PartitionKey("productId1");
ItemResponse<Product> catalogResponse = await this.container.CreateItemAsync(product1, catalogPartitionKey, catalogRequestOptions);

// Display product information to the user as a high-priority request
ItemRequestOptions getProductRequestOptions = new ItemRequestOptions { PriorityLevel = PriorityLevel.High };

string id = "productId2";
PartitionKey productPartitionKey = new PartitionKey(id);

ItemResponse<Product> productResponse = await this.container.ReadItemAsync<Product>(id, productPartitionKey, getProductRequestOptions);

Similarly, users can maintain performance during short, temporary bursts, as requests that would otherwise have been rate-limited (HTTP 429) can now be served by burst capacity when available. In a deep-dive Cosmos DB blog post, the authors explain:

With burst capacity, each physical partition can accumulate up to 5 minutes of idle capacity. This capacity can be consumed at a rate of up to 3000 RU/s. Burst capacity applies to databases and containers that use manual or autoscale throughput and have less than 3000 RU/s provisioned per physical partition.

Source: https://devblogs.microsoft.com/cosmosdb/deep-dive-new-elasticity-features/
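
To make the arithmetic concrete, here is a minimal C# sketch of the burst-capacity math described above; the provisioned throughput figure is an assumed example, and the constants simply restate the quoted limits rather than come from any SDK API:

// Illustrative numbers only; the limits restate the quote above.
const int provisionedRuPerSec = 1000;    // assumed throughput per physical partition
const int maxIdleSeconds = 5 * 60;       // up to 5 minutes of idle capacity accrues
const int maxBurstRuPerSec = 3000;       // burst is consumed at up to 3,000 RU/s

int bankedRu = provisionedRuPerSec * maxIdleSeconds; // 300,000 RUs available for bursts

// Draining the bank at the maximum burst rate (3,000 RU/s) while traffic also
// consumes the provisioned 1,000 RU/s gives roughly this much extra headroom:
double burstSeconds = bankedRu / (double)(maxBurstRuPerSec - provisionedRuPerSec);
Console.WriteLine($"{bankedRu} RUs banked, ~{burstSeconds:F0} seconds of full-rate burst");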

Beyond optimizing performance, hierarchical partition keys improve elasticity for Cosmos DB. The feature helps in scenarios where users rely on synthetic partition keys or on logical partition keys that can exceed 20 GB of data. With hierarchical partition keys, they can use up to three keys to further sub-partition their data, enabling more optimal data distribution and larger scale. Behind the scenes, Azure Cosmos DB automatically distributes the data among physical partitions so that a logical partition prefix can exceed the limit of 20 GB of storage.

Leonard Lobel, a Microsoft Data Platform MVP, outlines the benefit of hierarchical keys in an IoT scenario in a blog post:

You could define a hierarchical partition key based on device ID and month. Then, devices that accumulate less than 20 GB of data can store all their telemetry inside a single 20 GB partition, but devices that accumulate more data can exceed 20 GB and span multiple physical partitions, while keeping each month’s worth of data “sub-partitioned” inside individual 20 GB logical partitions. Then, querying on any device ID will always result in either a single-partition query (devices with less telemetry) or a sub-partition query (devices with more telemetry).
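
As a rough sketch of the IoT scenario above in the .NET SDK (assuming a recent SDK version that accepts multiple partition key paths through ContainerProperties; the database, container, and property names are illustrative):

using System.Collections.Generic;
using Microsoft.Azure.Cosmos;

// Hierarchical partition key: device ID first, then month
List<string> partitionKeyPaths = new List<string> { "/deviceId", "/month" };

ContainerProperties containerProperties = new ContainerProperties(
    id: "telemetry",
    partitionKeyPaths: partitionKeyPaths);

// 'database' is an existing Microsoft.Azure.Cosmos.Database instance
Container container = await database.CreateContainerIfNotExistsAsync(
    containerProperties,
    throughput: 400);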

Lastly, with the expansion of serverless container storage to 1 TB, maximum throughput for serverless containers now starts at 5,000 RU/s and can exceed 20,000 RU/s, depending on the number of partitions available in the container. The expansion also offers increased burstability.



MongoDB, Inc. (NASDAQ:MDB) Holdings Cut by Victory Capital Management Inc.

MMS Founder
MMS RSS

Posted on mongodb google news. Visit mongodb google news

Victory Capital Management Inc. trimmed its stake in MongoDB, Inc. (NASDAQ:MDB) by 50.9% during the fourth quarter, according to its most recent disclosure with the SEC. The fund owned 39,335 shares of the company's stock after selling 40,804 shares during the quarter. Victory Capital Management Inc. owned approximately 0.06% of MongoDB worth $7,743,000 as of its most recent SEC filing.

Other hedge funds also recently added to or reduced their stakes in the company. Vanguard Group Inc. raised its holdings in shares of MongoDB by 1.0% during the third quarter. Vanguard Group Inc. now owns 6,127,231 shares of the company’s stock valued at $1,216,623,000 after buying an additional 62,303 shares during the last quarter. Franklin Resources Inc. raised its stake in shares of MongoDB by 6.4% during the fourth quarter. Franklin Resources Inc. now owns 1,962,574 shares of the company’s stock valued at $386,313,000 after purchasing an additional 118,055 shares in the last quarter. State Street Corp raised its stake in shares of MongoDB by 1.8% during the third quarter. State Street Corp now owns 1,349,260 shares of the company’s stock valued at $267,909,000 after purchasing an additional 23,846 shares in the last quarter. 1832 Asset Management L.P. raised its stake in shares of MongoDB by 3,283,771.0% during the fourth quarter. 1832 Asset Management L.P. now owns 1,018,000 shares of the company’s stock valued at $200,383,000 after purchasing an additional 1,017,969 shares in the last quarter. Finally, Geode Capital Management LLC raised its stake in shares of MongoDB by 4.5% during the fourth quarter. Geode Capital Management LLC now owns 931,748 shares of the company’s stock valued at $183,193,000 after purchasing an additional 39,741 shares in the last quarter. 89.22% of the stock is currently owned by institutional investors.

MongoDB Stock Up 0.4%

MongoDB stock opened at $389.99 on Monday. The firm has a market cap of $27.31 billion, a PE ratio of -83.51 and a beta of 1.04. The company has a debt-to-equity ratio of 1.44, a quick ratio of 4.19 and a current ratio of 4.19. The firm’s 50 day moving average price is $295.63 and its two-hundred day moving average price is $239.23. MongoDB, Inc. has a twelve month low of $135.15 and a twelve month high of $398.89.

MongoDB (NASDAQ:MDB) last released its quarterly earnings data on Thursday, June 1st. The company reported $0.56 earnings per share for the quarter, beating the consensus estimate of $0.18 by $0.38. MongoDB had a negative net margin of 23.58% and a negative return on equity of 43.25%. The firm had revenue of $368.28 million for the quarter, compared to analyst estimates of $347.77 million. During the same quarter in the prior year, the business earned ($1.15) EPS. The company's revenue for the quarter was up 29.0% compared to the same quarter last year. On average, research analysts predict that MongoDB, Inc. will post -2.85 EPS for the current fiscal year.

Analysts Set New Price Targets

Several brokerages recently commented on MDB. Guggenheim cut MongoDB from a “neutral” rating to a “sell” rating and boosted their target price for the stock from $205.00 to $210.00 in a report on Thursday, May 25th. They noted that the move was a valuation call. Barclays boosted their target price on MongoDB from $280.00 to $374.00 in a report on Friday, June 2nd. Mizuho boosted their target price on MongoDB from $220.00 to $240.00 in a report on Friday. Stifel Nicolaus boosted their price target on shares of MongoDB from $375.00 to $420.00 in a research report on Friday. Finally, The Goldman Sachs Group boosted their price target on shares of MongoDB from $420.00 to $440.00 in a research report on Friday. One research analyst has rated the stock with a sell rating, two have given a hold rating and twenty-one have issued a buy rating to the company. According to data from MarketBeat, the company currently has an average rating of “Moderate Buy” and an average target price of $349.87.

Insiders Place Their Bets

In other MongoDB news, CTO Mark Porter sold 1,900 shares of the company's stock in a transaction dated Monday, April 3rd. The stock was sold at an average price of $226.17, for a total value of $429,723.00. Following the transaction, the chief technology officer now directly owns 43,009 shares of the company's stock, valued at approximately $9,727,345.53. The sale was disclosed in a legal filing with the SEC, which is accessible through the SEC website. Also, CAO Thomas Bull sold 605 shares of the stock in a transaction on Monday, April 3rd. The stock was sold at an average price of $228.34, for a total transaction of $138,145.70. Following the transaction, the chief accounting officer now owns 17,706 shares in the company, valued at approximately $4,042,988.04. Insiders have sold a total of 108,856 shares of company stock valued at $27,327,511 in the last three months. Insiders own 4.80% of the company's stock.

MongoDB Profile


MongoDB, Inc provides general purpose database platform worldwide. The company offers MongoDB Atlas, a hosted multi-cloud database-as-a-service solution; MongoDB Enterprise Advanced, a commercial database server for enterprise customers to run in the cloud, on-premise, or in a hybrid environment; and Community Server, a free-to-download version of its database, which includes the functionality that developers need to get started with MongoDB.


Institutional Ownership by Quarter for MongoDB (NASDAQ:MDB)



Receive News & Ratings for MongoDB Daily – Enter your email address below to receive a concise daily summary of the latest news and analysts’ ratings for MongoDB and related companies with MarketBeat.com’s FREE daily email newsletter.

Article originally posted on mongodb google news. Visit mongodb google news



MongoDB introduces Atlas to accelerate cloud adoption – IBS Intelligence

MMS Founder
MMS RSS

Posted on mongodb google news. Visit mongodb google news


By Delisha Fernandes


MongoDB, a software provider, announced MongoDB Atlas for Industries, a new program to help organisations accelerate cloud adoption and modernisation by leveraging industry-specific expertise, programs, partnerships, and integrated solutions.

With MongoDB Atlas for Industries, customers can access architectural design reviews, technology partnerships that deliver enhanced solutions for industry-specific challenges, and industry-specific knowledge accelerators to provide highly relevant technology training paths for development teams.

MongoDB Atlas helps financial institutions improve customer experiences by modernising legacy functionality on existing in-house banking systems and building composable architectures that make it easy to integrate industry-leading software tools to get ideas to market faster with the performance and scale they need.

Additionally, the company has become a critical foundation of the Temenos Banking Cloud, with a recent benchmark proving that Temenos and MongoDB can support the needs of even the largest global banks with exceptionally high performance while providing transparent data access through the JSON document model.

“Increasingly, organisations in different industries are moving away from one-size-fits-all technology solutions to improve their competitive advantage and better serve customers. At MongoDB, we have built a team of experts with deep industry knowledge, many of whom used to be customers, so we are not only building better products with the right third-party integrations, but our team can help customers get up and running faster,” said Boris Bialek, Managing Director of Industry Solutions at MongoDB.

“The company has been a trusted AWS Partner for eight years, giving customers running MongoDB Atlas on AWS a more powerful experience for building modern applications,” said Mona Chadha, Director of Infrastructure Partnerships, ISVs at AWS. “Achieving the AWS Financial Services Competency reaffirms MongoDB’s commitment to AWS and financial services customers while demonstrating their breadth and depth of financial services expertise for use cases like fraud detection and real-time payments.”


Article originally posted on mongodb google news. Visit mongodb google news



Monolith does not always equal “bad” – The Stack

MMS Founder
MMS RSS

Posted on mongodb google news. Visit mongodb google news

Modernisation typically means transforming monolithic applications into containerized microservices. However, you don’t have to automatically turn everything in your applications into microservices and completely abandon the monolith. In fact, you can have it both ways!

If there's one thing that has defined online discourse of the 2020s, it's that opinions tend to be extremely binary, writes Deepak Goel, CTO, D2iQ. Things are either good or bad. There is little room in between, and when it comes to software architecture, whether to go with a microservices or a monolith approach, I can't help but feel like some nuance is badly needed here.

It is true that microservices have become the default way we think of modernising software, largely because of the scalability, flexibility, and agility that the architecture style affords. But microservices and monoliths each have their distinct merits for different circumstances, and we shouldn’t necessarily be thinking about the architecture in our applications as an either-or scenario.

We’ve seen recently how that nuance can get lost when Amazon’s Prime Video team cut 90% of its cost by moving the deployment of its audio/video service away from Serverless Step Functions to Elastic Container Service. It sparked debate in the developer community about whether this constituted a monolithic approach or just one big microservice, yet plenty of headlines in the media described this move as Amazon ‘dumping’ microservices and ‘switching to monolith’. Renowned tech advisor Adrian Cockcroft didn’t shy away from critiquing what he felt were “bad takes”.

What really happened was the team replaced an assortment of distributed microservices handling video/audio stream analysis processes with an architecture that instead had all components running inside a single Amazon ECS task. This is still a distributed application; it’s just not a purely serverless-based one anymore. We shouldn’t be focusing on the semantics of monolith vs. microservices, rather on how we can optimise the architecture of our software.

How do we know when an architecture should be fully distributed or not? I’m glad you asked.

Believe it or not, microservices add complexity – you don’t always want that

The case for microservices seems simple. They’re small units of deployment that each can be deployed and tested independently. You don’t need to coordinate check-ins from hundreds of developers before deploying, and if you’re a small team, there’s an agility to being able to deploy frequently. The promise of being able to update specific components in an application independently, like a logging tool, without negatively affecting anything else is a huge selling point for microservices.

But the downside is that it also means you have to debug, troubleshoot, update, and deploy a distributed application, and the process of troubleshooting and debugging distributed applications can be much more difficult than for monolithic apps. There is a lot more network involved, which can lead to complicated performance issues and weird bottlenecks. Look at it this way: an entire industry has sprung up around how to understand the performance of a distributed system, and it's still really hard. Unless you're already immersed in the trenches of microservices, you might find this an extremely daunting task.

That is to say, microservices aren’t a “win button” you can press. They’re a tradeoff between ease of deployment and complexity, and when you split your applications too much the scales will inevitably tip further towards complexity than many developers are equipped to deal with. Even so, I’ve (frustratingly) seen some incredibly talented people choose complexity for the sake of complexity by overprovisioning their applications.

It’s true that we have some nice technologies for managing microservices such as containerization via Kubernetes. It’s super easy to spin up a Kubernetes cluster. But even the tools for managing microservices have challenges with which developers from legacy application backgrounds might struggle because taking that cluster to production is difficult.

The good news is you don’t have to worry about this if you’re deploying monolithic applications. Again, this should not be construed as a seal of approval to just “monolith” everything. Remember, it doesn’t have to be a microservice vs. monolith decision.

Some parts of your applications will need scalability – others won’t

Not everything in your applications needs to be scalable. You can mix it up. There’s no reason to jump head-first into the lava pit of complexity on Day 1 if you’re a startup. You can make the core of your application monolithic, and the other components microservices-based. You’ll be thankful for this if your application explodes in popularity because it means you can dynamically react to that growth by increasing the proportion of your application broken down into independent services.

Now, I’ve heard people balk at the suggestion of this mixed approach because they don’t deem it feasible to convert monoliths to microservices later in the application’s lifecycle. The reason for this difficulty comes from the fact that monolith components make functional calls whereas microservices components make API calls, and these differ greatly from one another. Although it’s not easy, you can get around this if you treat your function like an API and clearly define boundaries.
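
As an illustrative sketch in C# (all names hypothetical), a monolith component whose function is already treated like an API can later be swapped for a real network call without touching its callers:

using System.Net.Http;
using System.Threading.Tasks;

// The boundary: callers depend on this interface, not on how it is implemented.
public interface IInventoryService
{
    Task<int> GetStockAsync(string productId);
}

// Monolith today: a plain in-process function call behind the boundary.
public sealed class LocalInventoryService : IInventoryService
{
    public Task<int> GetStockAsync(string productId) =>
        Task.FromResult(42); // placeholder for a local lookup
}

// Microservice later: the same boundary fronts an HTTP call instead.
public sealed class RemoteInventoryService : IInventoryService
{
    private readonly HttpClient _http;

    public RemoteInventoryService(HttpClient http) => _http = http;

    public async Task<int> GetStockAsync(string productId) =>
        int.Parse(await _http.GetStringAsync($"/inventory/{productId}"));
}

Because both implementations satisfy the same contract, the decision to extract the service can be deferred until scale actually demands it.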

The hassle here is substantially less than the headache of an all-microservice approach from the get-go that forces a developer team to deal with a mountain of complexity before it’s even remotely necessary. On the flip side, you don’t want an all-monolith architecture because, aside from the fact that such an architecture generally triggers lazy application writing habits, it could limit your ability to scale the application later on.

Stick with a mixed architecture and you're more likely to straddle the line between simplicity and scalability. Istio, an open source service mesh, is a good example of this approach done right. It went through a bumpy start, initially adopting a completely microservice-based architecture, even splitting the control plane into different services, which made deployment radically more complex than it needed to be. In later versions, the Istio team consolidated its two components (Pilot and Mixer) into one. A simpler architecture ended up being a better choice (and it still deploys as a container in a pod).

Platform engineering can make microservices more manageable

How about when you’ve eventually reached a level of maturity and requirements that forces you to make your whole application microservices-based? Abstraction will be key here.

Kubernetes helps manage microservices, but Kubernetes in itself is a very challenging technology to work with once you get to Day 2. It’s the car that requires you to be a wizard mechanic to drive from point A to B, and not everyone in today’s software development climate is an expert mechanic. Most have been taught how to drive just by using the steering wheel, brakes, accelerator, etc. They weren’t taught how to repair or modify an engine; that’s what the mechanics are for.

Similarly, we shouldn't expect everyone to be an expert in Kubernetes at the infrastructure level, especially not with today's skills gap. A better route would be to build out small bespoke platform engineering teams, the Kubernetes mechanics, who have inside-out knowledge of the underlying infrastructure behind internal developer platforms that can simplify user interfaces and automate Kubernetes functions for developers. We should be empowering developers to focus more on innovation, not maintenance.

Having user-friendly tools that help manage microservices will make them feel less frighteningly complex. Just don’t make the mistake of thinking you have to make everything microservices on Day 1. Technology should serve developers, and by extension, their end users, not be used as a means to flex one’s technical problem-solving skills.

Deepak Goel serves as Chief Technology Officer at D2iQ, leading the design, development, and build of products on its Kubernetes platform and enabling Day 2 operations in multi-cluster, multi-tenant Kubernetes environments. Deepak brings over 10 years of experience in the computer industry across networking, distributed systems and security, has co-authored several research papers and holds a number of patents in computer networks, virtualization and multi-core systems.

Don't forget to subscribe to The Stack. We don't do pop-ups; you can find the button…

Article originally posted on mongodb google news. Visit mongodb google news



Aussie-built database migration tool makes global debut | Computer Weekly

MMS Founder
MMS RSS

Posted on mongodb google news. Visit mongodb google news

An Australian-built database migration tool is now available globally to help organisations migrate traditional relational databases to MongoDB, the open-source NoSQL document database that makes it easier to analyse data.

Developed by MongoDB’s engineering team in Australia, the Relational Migrator analyses relational databases such as Microsoft SQL Server, generates new data schemas and code, and migrates data to the MongoDB Atlas cloud database while running continuous sync jobs with no downtime.

The tool will also generate optimised code for working with data in a modern application. Organisations can then run the application in a testing environment to ensure it is operating as intended before deploying it to production.

MongoDB said the tool addresses the challenges of database migrations which require highly specialised tooling and knowledge to assess existing applications and prepare data for migration. Even then, the migration process can result in data loss, application downtime, and a migrated application that does not function as intended.

“Customers often tell us it’s crucial that they modernise their legacy applications so they can quickly build new end-user experiences that take advantage of game-changing technologies and ship new features at high velocity,” said Sahir Azam, chief product officer at MongoDB.

“But they also say that it’s too risky, expensive, and time consuming, or that they just don’t know how to get started. With MongoDB Relational Migrator, customers can now realise the full potential of software, data, and new technologies like generative AI by migrating and modernising their legacy applications with a seamless, zero-downtime migration experience and without the heavy lifting,” he added.

Australia is a core innovation hub for MongoDB, with its local engineering team playing a “huge role in delivering the long-term product vision for the company, which then ends up in the hands of thousands of organisations around the world,” said Mark Porter, the company’s chief technology officer.

“I am looking forward to seeing more great innovations come out of our local Australian engineering team, especially around AI, data visualisation, analytics and our all-important storage engine,” he added, noting that the team is working on many projects in those areas.

MongoDB Relational Migrator was rolled out globally after being successfully trialled by local Australian customers such as renewable energy provider Powerledger, an energy trading platform.

“We needed to demonstrate our platform’s ability to ingest a much higher volume of data and cater to the one billion users we aim to serve in the future, which required a level of scalability and flexibility that our previous relational database couldn’t offer,” said Vivek Bhandari, chief technology officer of Powerledger.

“Migrating an entire database is a pretty bold and risky endeavour. Our main priorities – and challenges – were to do a complete data platform migration, as well as add in scalability and flexibility without disrupting the platform or hindering data security. Amazingly, using MongoDB Relational Migrator, we didn’t experience any disruption or downtime,” he said.

Article originally posted on mongodb google news. Visit mongodb google news



Java News Roundup: JNoSQL 1.0, Liberica NIK 23.0, Micronaut 4.0-RC2, Log4j 3.0-Alpha1, KCDC, JCON

MMS Founder
MMS Michael Redlich

Article originally posted on InfoQ. Visit InfoQ

This week’s Java roundup for June 19th, 2023 features news from JDK 22, JDK 21, updates to: Spring Boot; Spring Security; Spring Vault; Spring for GraphQL; Spring Authorization Server and Spring Modulith; Liberica NIK 23.0, Semeru 20.0.1, Micronaut 4.0-RC2 and 3.9.4, JNoSQL 1.0, Vert.x 4.4.4, updates to: Apache Tomcat, Camel, Log4j and JMeter; JHipster Lite 0.35, KCDC 2023 and JCON Europe 2023.

JDK 21

Build 28 of the JDK 21 early-access builds was also made available this past week featuring updates from Build 27 that include fixes to various issues. Further details on this build may be found in the release notes.

JDK 22

Build 3 of the JDK 22 early-access builds was also made available this past week featuring updates from Build 2 that include fixes to various issues. More details on this build may be found in the release notes.

For JDK 22 and JDK 21, developers are encouraged to report bugs via the Java Bug Database.

Spring Framework

Versions 3.1.1, 3.0.8 and 2.7.13 of Spring Boot deliver improvements in documentation, dependency upgrades and notable bug fixes such as: difficulty using the from() method defined in the SpringApplication class in Kotlin applications; SSL configuration overwrites other customizations from the WebClient interface; and support for JDK 20, but no defined value for it in the JavaVersion enum. Further details on these versions may be found in the release notes for version 3.1.1, version 3.0.8 and version 2.7.13.

Versions 6.1.1, 6.0.4, 5.8.4, 5.7.9 and 5.6.11 of Spring Security have been released featuring bug fixes, dependency upgrades and new features such as: align the OAuth 2.0 Resource Server documentation with Spring Boot capabilities; a new section in the reference manual to include information related to support and limitations when working with native images; and a migration to Asciidoctor Tabs. More details on these versions may be found in the release notes for version 6.1.1, version 6.0.4, version 5.8.4, version 5.7.9 and version 5.6.11.

The release of Spring Vault 3.0.3 delivers bug fixes, improvements in documentation, dependency upgrades and new features such as: a refinement in logging to log the token accessor upon token revocation failure; AWS Identity and Access Management (IAM) authentication added to the EnvironmentVaultConfiguration class; and the inclusion of a key_version attribute to the encrypt() method in the VaultTransitOperations interface. Further details on this release may be found in the release notes.

Versions 1.2.1 and 1.1.5 of Spring for GraphQL have been released featuring bug fixes, dependency upgrades and new features such as: an enhanced GraphQL request body check to prevent a 500 Internal Server Error when a 400 Bad Request is expected; elimination of the IllegalArgumentException due to no defined ConnectionAdapter interface when using existing Java Connection types. More details on these versions may be found in the release notes for version 1.2.1 and version 1.1.5.

Versions 1.1.1, 1.0.3 and 0.4.3 of Spring Authorization Server have been released featuring bug fixes and dependency upgrades. Version 1.1.1 ships with a new feature in which there was a performance enhancement by simply replacing the replaceFirst() method with the substring() method from the String class while using the OAuth2AuthorizationConsent class. Further details on these versions may be found in the release notes for version 1.1.1, version 1.0.3 and version 0.4.3.

The first milestone release of Spring Modulith 1.0.0 ships with bug fixes, dependency upgrades and a new feature that propagates instances of the ExecutorService interface defined in an application into instances of the Scenario class by default. This project has been promoted from its experimental status yielding these breaking changes: a rename of the actuator endpoint from applicationmodules to application-modules; a rename of the group identifier from org.springframework.experimental to org.springframework.modulith; and the removal of the previously deprecated configuration properties, spring.modulith.events.jdbc-*, in the JDBC-based event registry. More details on this release may be found in the release notes.

BellSoft

BellSoft has released version 23.0 of their Liberica Native Image Kit (NIK) featuring: the integration of the ParallelGC garbage collector as an experimental feature; implementation of the JFR ThreadCPULoad event; a removal of type checks from JNI-to-Java call stubs that can break compatibility; and implementation of the user CPU time thread with the getThreadCpuTime() method in the LinuxThreadCpuTimeSupport class.

IBM Semeru Open Edition

IBM has released version 20.0.1 of their Semeru Runtime, Open Edition, built on OpenJDK 20.0.1 and Eclipse OpenJ9 0.39.0. Further details on this release may be found in the release notes.

Micronaut

The second release candidate of Micronaut 4.0.0 was also released, providing bug fixes, dependency upgrades and these improvements: use of unsafe setters for Jackson; a new UnsafeBeanInstantiationIntrospection interface, a variation of the BeanIntrospection interface that includes an instantiateUnsafe() method that allows skipping instantiation validation; and support for the All-open compiler plugin for the Kotlin Symbol Processing API.

The Micronaut Foundation has released Micronaut Framework 3.9.4 featuring bug fixes and updates to modules: Micronaut Security and Micronaut Servlet. There was also a dependency upgrade to Netty 4.1.94. More details on this release may be found in the release notes.

Eclipse Foundation

More than six years after its inception in March 2017, version 1.0.0 of JNoSQL, the compatible implementation of the Jakarta NoSQL specification, has been released. New features include: a migration to the jakarta.* namespace; support for the Jakarta Data specification; an implementation of new methods that explore the fluent API for the Graph, Document, Key-Value and Column NoSQL database types; and new methods, count() and exists(), as default on the DocumentManager and ColumnManager interfaces. Before it became a compatible implementation in November 2019, JNoSQL was a project for developers to more easily create NoSQL database applications using Java.

Two months after MicroStream had announced that their Java-native persistence layer had become an Eclipse Project, the first release of Eclipse Store, formerly known as MicroStream Persistence, has been made available to the Java community. Current non-Eclipse integrations in the MicroStream code base, such as Spring Boot, Quarkus and Helidon, will remain open source and the code will be hosted in a new MicroStream repository after they have been refactored to make use of the Eclipse Store and Eclipse Serializer projects.

Eclipse Vert.x 4.4.4 has been released featuring an upgrade to Netty 4.1.94.Final to address CVE-2023-34462, a vulnerability in which an attacker can manipulate the SniHandler class, with no configured idle timeout handler, to buffer the maximum 16MB of data per connection, which can quickly lead to an OutOfMemoryError and the potential for a distributed denial of service. Further details on this release may be found in the release notes.

Apache Software Foundation

The Apache Tomcat team has disclosed that versions 11.0.0-M5, 10.1.8, 9.0.74 and 8.5.88 are affected by CVE-2023-34981, a vulnerability in which a regression in the fix for Bug 66512 could lead to an information leak if a response did not include any HTTP headers, then no Apache JServ Protocol (AJP) SEND_HEADERS message would be sent for the response. This was fixed in Bug 66591 and developers are encouraged to migrate to minimal versions 11.0.0-M6, 10.1.9, 9.0.75 or 8.5.89.

The release of Apache Camel 3.20.6 provides bug fixes and improvements such as: ensure that the REQUEST_CONTEXT and RESPONSE_CONTEXT headers are mapped when populating a Camel CXF message from Camel Message; and enhancements to the Camel JBang module to support OpenAPI. More details on this release may be found in the release notes.

Similarly, the release of Apache Camel 3.14.9 ships with these bug fixes: use the createTempFile() method in the Files class within the FileConverter class instead of directly creating a file; and a potential NullPointerException when using XML Tokenize on a Woodstox XML namespace. Further details on this release may be found in the release notes.

The first alpha release of Apache Log4j 3.0.0 delivers notable changes such as: allow plugins to be created through more flexible dependency injection patterns; split support for Kafka, ZeroMQ, CSV, JMS, JDBC and Jackson to their own modules; and removal of support for the Serializable interface in several classes and interfaces that include Message, Layout, LogEvent, Logger, and ReadOnlyStringMap.

Apache JMeter 5.6.0 has been released featuring bug fixes and new features such as: use Caffeine for caching HTTP headers instead of the Apache Commons Collections LRUMap class; use the Java ServiceLoader class for loading plugins instead of classpath scanning for improved startup; and improved computation when many threads actively produce samplers by using the Java LongAdder and similar concurrency classes to avoid synchronization in the Calculator class. More details on this release may be found in the release notes.

JHipster

The JHipster team has released version 0.35.0 of JHipster Lite with bug fixes, improvements in documentation, dependency upgrades and an improved Sonar analysis that provides more error details and an option to wait. Further details on this release may be found in the release notes.

Kansas City Developer Conference

The 2023 Kansas City Developer Conference (KCDC) was held at the Kansas City Convention Center in Kansas City, Missouri this past week featuring speakers from the Java community who presented workshops and sessions on topics such as: Java, architecture, cloud, data science, JavaScript, project management and security. The conference also featured puppies available for adoption from the Great Plains SPCA.

JCON Europe

Also this past week, JCON Europe 2023 was held at the Cinedom in Köln, Germany, featuring speakers from the Java community who presented sessions on topics such as: Java, developer productivity engineering, security, web components, microservices and cloud native.



Patch Work, SUSE Stitches Digital Infrastructure Threads Tighter – Forbes

MMS Founder
MMS RSS

Posted on mongodb google news. Visit mongodb google news

Technology has layers. A lot like the ogre's onion analogy in Shrek, technology comes in layers because we have an upper front-end user interface layer, a middleware connectivity layer and a lower substrate back-end, all of which must function in beautiful harmony if we are to enjoy apps the way we want them. We can classify technology stacks into tiers that extend beyond those three basic forms, but you get the point: the modern digital stack is a fabric of interlaced services, functions, computing resources and threads.

It's not hard to extend the fabric analogy further: when we need to fix tears, breaks, misconfigurations and frailties in our IT systems today, we talk about 'patching' with application code designed to remediate, update and obviate the risk of an IT service becoming further degraded, less functional or (at worst) compromised from a security perspective.

A trend report sponsored by enterprise-grade open source company SUSE this year suggested that more than 88% of respondents reported experiencing more than one cloud-related security incident in the past year. Whether it's half that figure or somewhere in between isn't that important; the fact is that cloud-centric digital systems are expanding and, therefore, SUSE says it continues to build out its infrastructure security stack to ensure that customers, partners and open source communities can safely run their application workloads, whether in the cloud, at the edge or in datacenters, to make their business more resilient.

“Every enterprise must maximize its business resilience to face increasingly sophisticated and potentially devastating digital attacks,” said Dr. Thomas Di Giacomo, chief technology and product officer of SUSE. “That means they need to get serious about the security posture of their complex workloads, particularly AI/ML platforms where the protection of customer data is under intense scrutiny. SUSE’s approach to supply chain security along with the latest announcements allows customers to safely adapt the advantages of a cloud-native world and to secure their digital business.”

Live patching, no needles required

The latest version of the company's SUSE Linux Enterprise 15 platform is denoted by the perhaps less-than-snappily titled Service Pack 5 (SLE 15 SP5) nametag. This is technology designed to deliver the high-performance computing capabilities essential for AI/ML workloads, and this iteration of the platform of course works in lockstep with Rancher (which SUSE bought in 2020), a widely adopted Kubernetes platform. Because this platform now extends the company's Live Patching capabilities, the suggestion from SUSE is that this is a leg up for business continuity, security and legislative compliance.

Platform-level progressions of this sort normally reflect major technology industry trends, movements and (let’s be honest) innovation hype cycles.

Among the motivators driving this new Linux distribution is the technology's ability to support the spectrum of so-called Confidential Computing, an approach to secure customer data management in which information is processed in the public cloud and at the Internet of Things (IoT) edge. This is said to allow organizations to run fully encrypted virtual machines (VMs) in any computing environment. As such, SLES 15 SP5 supports the latest microprocessor chipset innovations from AMD, Arm, IBM and Intel.

“Sustainably and securely meeting cloud computing performance demands requires energy-efficient, specialized processing alongside a strong software ecosystem. Our ongoing work with SUSE to expand the SUSE Linux Enterprise portfolio enables the Arm ecosystem to bring their innovative solutions to market faster on a well-established operating system (OS) like SLE Micro, with confidence in security proven by its achieving PSA Certified Level 1,” said Andrew Wafaa, senior director of software communities, Arm.

SUSE's report also found that 88% of respondents (there's that same figure again) agreed with the proposition that their teams would locate, proliferate and migrate more application and data workloads in cloud environments and at the IoT edge if they could be more certain that their data couldn't be tampered with. To ensure customers and partners are protected, Rancher by SUSE builds off its spring 2023 launch with new security-focused product updates that include optimized storage, support for hardened VMs and improved vulnerability and compliance management.

Among the many developments in this release, the company notes that the Adaptable Linux Platform (ALP) brings enterprise Linux forward into modern cloud environments by evolving to a more 'modular Linux', running containerized and virtualized workloads.

“SUSE ALP is an open source project that provides self-healing and self-management, executing tasks affecting both the OS and the container layer. This allows users to focus on their workloads while abstracting from hardware and applications,” notes SUSE, in a platform specifications document.

Pizza-grade open source credentials

Overall, we know that enterprise grade open source is getting stronger, enjoying more widespread deployment and being maintained, supported and extended – plus of course patched and augmented – in ways that we would not have seen around the turn of the millennium.

SUSE claims and promises to be 'putting the open back in open source' and its industry events are indeed fully populated by hard-core software programming engineers (wearing shorts in winter, consuming pizza and soda, occasionally dyeing their hair green to match SUSE's corporate colors; the stereotype is alive and well, but is now geek-chic and commands respect) who would quickly call out any form of team development established on anything other than a fair system of meritocracy and effort.

That said, SUSE is still a commercial enterprise and will want to sell enterprise contracts to support business-critical Linux use cases that come with a side order of enterprise container management and edge solutions that fall in line with partner-connected deals and more. But why not? That's fine, as long as the open heart is pure, and we know it is in this case; that goes for patches of patching work too.

Despite the popularization of sewing and knitting among millennials and Gen-Z types, developers might not be taking up either craft anytime soon, not unless the thread weaves through a Linux kernel.

Article originally posted on mongodb google news. Visit mongodb google news



MongoDB Embraces AI & Reduces Developer Friction With New Features – Forbes

MMS Founder
MMS RSS

Posted on mongodb google news. Visit mongodb google news

Even if you haven’t heard of MongoDB, odds are good that you touch it in your daily online life. MongoDB has enabled more than 43,000 organizations to build solutions leveraging MongoDB technology, including some of the biggest names in technology, healthcare, telecom, and financial services. The company’s horizontal document-oriented (often called NoSQL) database technology underpins a broad swath of workloads that all need modern data services – needs that often don’t directly map to the constraints of traditional relational databases.

Servicing the quickly evolving needs of modern application development requires rapid innovation and fast product cycles. MongoDB demonstrated both last week at its MongoDB.local 2023 event in New York City, introducing a compelling set of new features and services.

The announcements cover a wide breadth of territory, with new capabilities to leverage the latest AI technology, features that enable greater developer productivity, ease the burden of enterprise application development, and even a new program to simplify deploying MongoDB technology into a targeted set of verticals. There’s a lot to delve into.

Enabling AI

It's impossible to talk about application development today without touching on artificial intelligence. Generative AI, typified by large language models (LLMs) such as ChatGPT, captures headlines daily. The question technology companies and IT practitioners alike most often ask me is how AI will affect them. MongoDB this past week illustrated how generative AI impacts the data plane.

MongoDB Atlas Vector Search

Technologies such as generative AI change how we think about managing the data that feeds AI-driven systems. Language processing, for example, utilizes attributes on data called "vectors."

You can think of vector embeddings as tags placed on data by an AI model that define the relationships between words. These vectors are then used as efficient shortcuts when running generative AI models (this is a simplistic explanation of vectors; interested readers should seek out a more in-depth explanation).

MongoDB's new MongoDB Atlas Vector Search is designed to simplify the development of AI language and generative AI applications. The new capability supports vector embeddings directly on data stored in MongoDB, allowing new generative AI applications to be quickly and efficiently developed on MongoDB Atlas.

MongoDB Atlas Vector Search is also integrated with the open-source LangChain and LlamaIndex frameworks with tools for accessing and managing LLMs for various applications.
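
As a rough sketch of what querying Atlas Vector Search can look like from the .NET driver (the index name "vector_index", the "embedding" field, and the GetEmbeddingForQuery helper are all hypothetical, and the $vectorSearch stage shape is assumed from Atlas documentation):

using MongoDB.Bson;
using MongoDB.Driver;

var client = new MongoClient("mongodb+srv://..."); // connection string elided
var collection = client.GetDatabase("store").GetCollection<BsonDocument>("products");

// The query vector would come from the same embedding model used to index the data.
double[] queryVector = GetEmbeddingForQuery("running shoes"); // hypothetical helper

var vectorSearchStage = new BsonDocument("$vectorSearch", new BsonDocument
{
    { "index", "vector_index" },                  // assumed Atlas Vector Search index name
    { "path", "embedding" },                      // assumed field holding the embeddings
    { "queryVector", new BsonArray(queryVector) },
    { "numCandidates", 100 },                     // candidates considered before ranking
    { "limit", 5 }                                // nearest neighbors returned
});

PipelineDefinition<BsonDocument, BsonDocument> pipeline = new[] { vectorSearchStage };
var results = await collection.Aggregate(pipeline).ToListAsync();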

MongoDB AI Innovators Program

Building and deploying applications leveraging the latest in AI technology can be daunting. The concepts, tools, and even infrastructure significantly differ from more traditional software development approaches. AI applications can also require multiple iterations of model training as the application evolves, adding significant development costs.

Last week, recognizing the unique challenges of developing AI applications, MongoDB announced its new MongoDB AI Innovators Program, designed to ease the unique burdens of developing AI applications. The new program offers several benefits, including providing eligible organizations with up to $25,000 in credits for MongoDB Atlas.

The AI Innovators Program also includes engagement opportunities with MongoDB to fast-track strategic partnerships and joint go-to-market activities with what the company calls its AI Amplify track. Companies participating in the AI Amplify track have their submissions evaluated by MongoDB to gauge the appropriateness of a potential partnership. MongoDB technical experts are also available for solutions architecture and to help identify compelling use cases for co-marketing opportunities.

Finally, MongoDB is making its partner ecosystem available to program participants. Organizations participating in the MongoDB AI Innovators Program will have prioritized access to opportunities with MongoDB Partners, and eligible organizations can be fast-tracked to join the MongoDB Partner Ecosystem to build seamless, interoperable integrations and joint solutions. MongoDB has over 1,000 partners, making this an attractive benefit of the program.

New MongoDB Atlas Capabilities

In addition to the new vector search capabilities already mentioned, there were four additional capabilities introduced into MongoDB Atlas:

  • MongoDB Atlas Search Nodes now provide dedicated infrastructure for search use cases so customers can scale independently of their database to manage unpredictable spikes and high-throughput workloads with greater flexibility and operational efficiency.
  • MongoDB Atlas Stream Processing transforms building event-driven applications that react and respond in real-time by unifying how developer teams work with data-in-motion and data-at-rest.
  • MongoDB Atlas Time Series collections now make time-series workloads more efficient at scale for use cases from predictive maintenance for factory equipment to automotive vehicle-fleet monitoring to financial trading platforms.
  • New multi-cloud options for MongoDB Atlas Online Archive and Atlas Data Federation now enable customers to seamlessly tier and query data in Microsoft Azure in addition to Amazon Web Services.

Keeping with its theme of simplifying the developer experience, these new features should ease the burden of developing applications using MongoDB Atlas as an intelligent data platform.

Reducing Developer Friction

MongoDB is a foundational component for data modernization, but it is only part of the solution. Mongo recognizes this, calling its technology a “Developer Data Platform.” The phrase emphasizes the importance of empowering developers to build next-generation AI-enabled applications, often while also using AI. MongoDB empowers developers by delivering a data plane offering the capabilities most needed for modern applications.

Mongo announced new programming language support to facilitate adoption across multiple environments. The company added support for server-side Kotlin applications (Kotlin is a programming language designed for cross-platform application development). There is also new support for data processing and analytics with Python as MongoDB makes its open-source PyMongoArrow library generally available, allowing developers to efficiently convert data stored in MongoDB using some of the most popular Python-based analytics frameworks.

MongoDB is also adding additional support for deploying and managing MongoDB using Amazon AWS infrastructure-as-code (IaC) capabilities. MongoDB released a new integration with the AWS Cloud Development Kit (CDK), allowing developers to manage MongoDB Atlas resources with C#, Go, Java, and Python. This is a significant enabler for developers deploying on AWS.

MongoDB also simplified its Kubernetes integration with improvements to its MongoDB Atlas Kubernetes Operator. The new functionality makes it easier for developers to install and manage MongoDB Atlas deployments from within their Kubernetes environments.

Finally, MongoDB announced its new MongoDB Relational Migrator tool. The new tool makes migrating from traditional legacy databases into a MongoDB environment easier and significantly faster. MongoDB Relational Migrator analyzes legacy databases, automatically generates new data schema and code, and then executes a seamless migration to MongoDB Atlas without downtime. This capability will reduce the pain often experienced when moving data into a new environment from a legacy data store.

Analyst’s Take

MongoDB held an investor conference parallel to its developer-focused MongoDB.local event. At the investor event, MongoDB’s chief product officer, Sahir Azam, described how the company builds its product strategy and GTM activities around its understanding of the customer’s journey.

The features, and new business opportunities, announced by MongoDB make sense to anyone familiar with the development of a modern data-driven application. The new offerings help developers leverage MongoDB technology to create new applications while also implementing the features required to develop next-generation AI-enabled solutions.

There’s no question that developers appreciate what the company is delivering. As an enabling technology for other applications, MongoDB’s approach not only makes sense, it’s also necessary. It’s also paying off.

MongoDB has beaten consensus estimates in its earnings for seventeen straight quarters, with its most recent earnings besting EPS estimates by nearly 195%. The most recent quarter also saw Mongo growing its top-line revenue by 29% year-over-year. The company has increased revenue by 8x since 2018. That’s a tremendous vote of confidence from its customers, especially in a market that’s still hampering growth for nearly every foundational technology company.

MongoDB competes in a crowded segment, and we see innovation coming from its closest competitors, evidenced by recent announcements from competitors such as Elastic. At the same time, MongoDB stands out in this intensely competitive environment with its relentless focus on improving the experience for developers, quickly adapting to new trends in data analysis and AI, and implementing programs that allow its customers to launch new applications quickly. Seventeen straight earnings beats, over a thousand partners, and more than 43,000 customers all show that MongoDB is earning its success.

Disclosure: Steve McDowell is an industry analyst, and NAND Research an industry analyst firm, that engages in, or has engaged in, research, analysis, and advisory services with many technology companies, which may include those mentioned in this article. Mr. McDowell does not hold any equity positions with any company mentioned in this article.

Article originally posted on mongodb google news. Visit mongodb google news



MongoDB, Inc. (NASDAQ:MDB) Receives Average Rating of “Moderate Buy” from Brokerages

MMS Founder
MMS RSS

Posted on mongodb google news. Visit mongodb google news

Shares of MongoDB, Inc. (NASDAQ:MDB) have earned an average rating of “Moderate Buy” from the twenty-four research firms that are covering the company, MarketBeat Ratings reports. One analyst has rated the stock with a sell rating, two have given a hold rating and twenty-one have given a buy rating to the company. The average 12-month target price among analysts that have issued a report on the stock in the last year is $349.87.

Several research firms have recently weighed in on MDB. Robert W. Baird lifted their target price on MongoDB from $390.00 to $430.00 in a research report on Friday. Wedbush decreased their price objective on MongoDB from $240.00 to $230.00 in a report on Thursday, March 9th. Barclays lifted their price objective on MongoDB from $280.00 to $374.00 in a report on Friday, June 2nd. Oppenheimer lifted their price objective on MongoDB from $270.00 to $430.00 in a report on Friday, June 2nd. Finally, Sanford C. Bernstein lifted their price objective on MongoDB from $257.00 to $424.00 in a report on Monday, June 5th.

Insider Buying and Selling

In other MongoDB news, CRO Cedric Pech sold 720 shares of the company’s stock in a transaction on Monday, April 3rd. The stock was sold at an average price of $228.33, for a total value of $164,397.60. Following the completion of the sale, the executive now directly owns 53,050 shares in the company, valued at approximately $12,112,906.50. The sale was disclosed in a filing with the SEC. Also, CAO Thomas Bull sold 605 shares of the stock in a transaction dated Monday, April 3rd. The stock was sold at an average price of $228.34, for a total value of $138,145.70. Following the transaction, the chief accounting officer now directly owns 17,706 shares in the company, valued at approximately $4,042,988.04. Over the last ninety days, insiders sold 108,856 shares of company stock worth $27,327,511. Corporate insiders own 4.80% of the stock.

Institutional Inflows and Outflows

Several institutional investors have recently added to or reduced their stakes in MDB. 1832 Asset Management L.P. increased its holdings in shares of MongoDB by 3,283,771.0% in the fourth quarter. 1832 Asset Management L.P. now owns 1,018,000 shares of the company’s stock valued at $200,383,000 after purchasing an additional 1,017,969 shares during the period. Price T Rowe Associates Inc. MD lifted its position in shares of MongoDB by 13.4% in the first quarter. Price T Rowe Associates Inc. MD now owns 7,593,996 shares of the company’s stock valued at $1,770,313,000 after acquiring an additional 897,911 shares in the last quarter. Renaissance Technologies LLC lifted its position in shares of MongoDB by 493.2% in the fourth quarter. Renaissance Technologies LLC now owns 918,200 shares of the company’s stock valued at $180,738,000 after acquiring an additional 763,400 shares in the last quarter. Norges Bank purchased a new stake in shares of MongoDB in the fourth quarter valued at $147,735,000. Finally, Champlain Investment Partners LLC purchased a new position in MongoDB during the first quarter worth about $89,157,000. Institutional investors and hedge funds own 89.22% of the company’s stock.

MongoDB Trading Up 0.4%

MDB stock opened at $389.99 on Friday. The firm has a fifty-day moving average price of $295.63 and a 200-day moving average price of $238.81. MongoDB has a fifty-two-week low of $135.15 and a fifty-two-week high of $398.89. The company has a debt-to-equity ratio of 1.44, a quick ratio of 4.19 and a current ratio of 4.19. The firm has a market cap of $27.31 billion, a PE ratio of -83.51 and a beta of 1.04.

MongoDB (NASDAQ:MDB) last issued its quarterly earnings results on Thursday, June 1st. The company reported $0.56 earnings per share (EPS) for the quarter, topping analysts’ consensus estimates of $0.18 by $0.38. The company had revenue of $368.28 million during the quarter, compared to the consensus estimate of $347.77 million. MongoDB had a negative return on equity of 43.25% and a negative net margin of 23.58%. MongoDB’s revenue was up 29.0% on a year-over-year basis. During the same period last year, the company earned ($1.15) per share. Sell-side analysts anticipate that MongoDB will post ($2.85) earnings per share for the current fiscal year.

MongoDB Company Profile


MongoDB, Inc. provides a general-purpose database platform worldwide. The company offers MongoDB Atlas, a hosted multi-cloud database-as-a-service solution; MongoDB Enterprise Advanced, a commercial database server for enterprise customers to run in the cloud, on-premises, or in a hybrid environment; and Community Server, a free-to-download version of its database, which includes the functionality that developers need to get started with MongoDB.


Article originally posted on mongodb google news. Visit mongodb google news



Distributed SQL and the Internet of Things: Managing Data at Scale – CityLife

MMS Founder
MMS RSS

Posted on nosqlgooglealerts. Visit nosqlgooglealerts

The Internet of Things (IoT) has become an integral part of our daily lives, with billions of devices connected to the internet, collecting and sharing data. This data explosion has led to a growing need for efficient and scalable data management solutions. One such solution that has emerged as a game-changer in this space is Distributed SQL.

Distributed SQL is a new breed of databases that combines the best of both worlds: the scalability and fault tolerance of NoSQL databases, and the strong consistency and transactional capabilities of traditional relational databases. This powerful combination enables organizations to manage their IoT data at scale, while ensuring data integrity and consistency.

One of the key challenges faced by organizations in the IoT space is the sheer volume of data generated by connected devices. Traditional databases, which were designed for smaller, more predictable workloads, struggle to keep up with the demands of IoT data. This is where Distributed SQL comes in, offering a scalable solution that can handle the ever-increasing data volumes.

Distributed SQL databases are designed to scale horizontally, meaning that they can easily accommodate more data by adding more nodes to the system. This makes them an ideal choice for IoT applications, which often require the ability to store and process massive amounts of data. Additionally, Distributed SQL databases are built to be fault-tolerant, ensuring that data remains available even in the face of hardware failures or network outages. This is particularly important for IoT applications, where data loss or downtime can have significant consequences.
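To make the scaling model concrete, here is a minimal sketch of an ingest path against a Postgres-compatible distributed SQL cluster such as CockroachDB or YugabyteDB, both of which speak the PostgreSQL wire protocol, so the standard psycopg2 driver works unchanged. The DSN, table name, and schema are illustrative assumptions rather than any vendor’s recommendation; the point is that the cluster, not the application, decides how the table’s data is spread across nodes.

import psycopg2

# Hypothetical DSN; 26257 is CockroachDB's default SQL port.
conn = psycopg2.connect("postgresql://app@db-node-1:26257/iot?sslmode=require")
conn.autocommit = True

with conn.cursor() as cur:
    # The cluster splits this table into ranges and distributes them
    # across nodes automatically; adding nodes adds capacity horizontally.
    cur.execute("""
        CREATE TABLE IF NOT EXISTS sensor_readings (
            device_id   UUID        NOT NULL,
            recorded_at TIMESTAMPTZ NOT NULL,
            temperature DOUBLE PRECISION,
            PRIMARY KEY (device_id, recorded_at)
        )
    """)
    # Single-row write; gen_random_uuid() is available in CockroachDB
    # and in PostgreSQL 13 and later.
    cur.execute(
        "INSERT INTO sensor_readings (device_id, recorded_at, temperature) "
        "VALUES (gen_random_uuid(), now(), %s)",
        (21.7,),
    )

conn.close()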

Another challenge faced by organizations in the IoT space is the need for real-time data processing and analysis. IoT devices generate data continuously, and this data often needs to be processed and analyzed in real-time to enable timely decision-making and action. Distributed SQL databases are well-suited to this task, as they are designed to support high-performance, low-latency transactions.
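As a sketch of what low-latency ingest can look like in practice, the snippet below batches a set of simulated readings into a single network round trip using psycopg2’s execute_values helper, reusing the hypothetical sensor_readings table from the previous example. The batch size and values are arbitrary; in a real pipeline the rows would arrive from a device gateway or message queue.

import uuid
from datetime import datetime, timezone

import psycopg2
from psycopg2.extras import execute_values

conn = psycopg2.connect("postgresql://app@db-node-1:26257/iot?sslmode=require")

# One hundred simulated readings.
batch = [
    (str(uuid.uuid4()), datetime.now(timezone.utc), 20.0 + i * 0.1)
    for i in range(100)
]

with conn, conn.cursor() as cur:
    # One round trip for the whole batch instead of 100 separate INSERTs.
    execute_values(
        cur,
        "INSERT INTO sensor_readings (device_id, recorded_at, temperature) VALUES %s",
        batch,
    )

conn.close()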

Moreover, Distributed SQL databases offer strong consistency guarantees, ensuring that data remains accurate and up-to-date across all nodes in the system. This is particularly important for IoT applications, where data consistency is critical for accurate decision-making and analytics. In contrast, many NoSQL databases sacrifice consistency in favor of availability and partition tolerance, which can lead to data inconsistencies and inaccuracies.
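The consistency guarantee is easiest to see in a read-modify-write transaction. In the sketch below, a count and a derived rollup row execute inside one transaction; on a distributed SQL system running at SERIALIZABLE isolation (CockroachDB’s default, for example), both statements observe a single consistent snapshot regardless of which node serves the request. The reading_rollups table is another illustrative assumption.

import psycopg2

conn = psycopg2.connect("postgresql://app@db-node-1:26257/iot?sslmode=require")

try:
    with conn:  # psycopg2 commits on success, rolls back on error
        with conn.cursor() as cur:
            cur.execute(
                "SELECT count(*) FROM sensor_readings "
                "WHERE recorded_at > now() - interval '1 minute'"
            )
            (recent,) = cur.fetchone()
            # Both statements run in one transaction, so the rollup is
            # derived from a consistent view of the data.
            cur.execute(
                "INSERT INTO reading_rollups (window_end, reading_count) "
                "VALUES (now(), %s)",
                (recent,),
            )
finally:
    conn.close()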

In addition to these technical benefits, Distributed SQL databases also offer several operational advantages for organizations managing IoT data at scale. For instance, they typically require less manual intervention and maintenance than traditional databases, as they are designed to automatically handle tasks such as data distribution, replication, and recovery. This can help organizations reduce the operational overhead associated with managing large-scale IoT data.

Furthermore, Distributed SQL databases are often cloud-native, meaning that they are designed to run seamlessly in cloud environments. This enables organizations to take advantage of the flexibility, scalability, and cost-efficiency offered by cloud computing, while still maintaining the performance and consistency guarantees of traditional relational databases.

In conclusion, Distributed SQL is a powerful solution for organizations looking to manage their IoT data at scale. By combining the scalability and fault tolerance of NoSQL databases with the strong consistency and transactional capabilities of traditional relational databases, Distributed SQL offers a robust and efficient way to store, process, and analyze the massive amounts of data generated by IoT devices. As the IoT continues to grow and evolve, Distributed SQL databases are poised to play an increasingly important role in helping organizations harness the full potential of their IoT data.
