Mobile Monitoring Solutions


DataStax to Deliver High-performance RAG Solution with 20x Faster Embeddings and …

MMS Founder
MMS RSS

Posted on nosqlgooglealerts. Visit nosqlgooglealerts

Cutting-edge Collaboration Enables Enterprises to Use DataStax Astra DB with NVIDIA Inference Microservices to Create Instantaneous Vector Embeddings to Fuel Real-time GenAI Use Cases

SANTA CLARA, Calif., March 18, 2024–(BUSINESS WIRE)–DataStax, the generative AI data company, today announced it is supporting enterprise retrieval-augmented generation (RAG) use cases by integrating the new NVIDIA NIM inference microservices and NeMo Retriever microservices with Astra DB to deliver high-performance RAG data solutions for superior customer experiences.

With this integration, users will be able to create instantaneous vector embeddings 20x faster than other popular cloud embedding services and benefit from an 80% reduction in cost for services.

Organizations building generative AI applications face the daunting technological complexities, security, and cost barriers associated with vectorizing both existing and newly acquired unstructured data for seamless integration into large language models (LLMs). The urgency of generating embeddings in near-real time and effectively indexing data within a vector database on standard hardware further compounds these challenges.

DataStax is collaborating with NVIDIA to help solve this problem. NVIDIA NeMo Retriever generates over 800 embeddings per second per GPU, pairing well with DataStax Astra DB, which is able to ingest new embeddings at more than 4000 transactions per second at single-digit millisecond latencies, on low-cost commodity storage solutions/disks. This deployment model greatly reduces total cost of ownership for users and performs lightning-fast embedding generation and indexing.

With embedded inferencing built on NVIDIA NeMo and NVIDIA Triton Inference Server software, DataStax Astra DB vector performance in RAG use cases running on NVIDIA H100 Tensor Core GPUs achieved 9.48ms latency for embedding and indexing documents, a 20x improvement.

When combined with NVIDIA NeMo Retriever, Astra DB and DataStax Enterprise (DataStax’s on-premise offering) provide a fast vector database RAG solution that’s built on a scalable NoSQL database that can run on any storage medium. Out-of-the-box integration with RAGStack (powered by LangChain and LlamaIndex) makes it easy for developers to replace their existing embedding model with NIM. In addition, using the RAGStack compatibility matrix tester, enterprises can validate the availability and performance of various combinations of embedding and LLM models for common RAG pipelines.

DataStax is also launching, in developer preview, a new feature called Vectorize. Vectorize performs embedding generation at the database tier, enabling customers to generate embeddings with Astra DB using DataStax’s own NeMo microservices instance instead of provisioning their own, passing the cost savings directly on to the customer.

“In today’s dynamic landscape of AI innovation, RAG has emerged as the pivotal differentiator for enterprises building genAI applications with popular large language frameworks,” said Chet Kapoor, chairman and CEO, DataStax. “With a wealth of unstructured data at their disposal, ranging from software logs to customer chat history, enterprises hold a cache of valuable domain knowledge and real-time insights essential for generative AI applications, but still face challenges. Integrating NVIDIA NIM into RAGStack cuts down the barriers enterprises are facing to bring them the high-performing RAG solutions they need to make significant strides in their genAI application development.”

“At Skypoint, we have a strict SLA of five seconds to generate responses for our frontline healthcare providers,” said Tisson Mathew, CEO and founder of Skypoint. “Hitting this SLA is especially difficult in the scenario that there are multiple LLM and vector search queries. Being able to shave off time from generating embeddings is of vast importance to improving the user experience.”

“Enterprises are looking to leverage their vast amounts of unstructured data to build more advanced generative AI applications,” said Kari Briski, vice president of AI software at NVIDIA. “Using the integration of NVIDIA NIM and NeMo Retriever microservices with DataStax Astra DB, businesses can significantly reduce latency and harness the full power of AI-driven data solutions.”

For more information, read the DataStax blog on this collaboration with NVIDIA and the RAG capabilities on Astra DB here.


About DataStax

DataStax, the GenAI data company, helps developers and companies successfully create a bold new world through GenAI. We offer a one-stop Generative AI Stack with everything needed for a faster, easier path to production for relevant and responsive GenAI applications. DataStax delivers a RAG-first developer experience, with first-class integrations into leading AI ecosystem partners, so it works with developers’ existing stacks of choice. Anyone can quickly build smart, high-growth AI applications at unlimited scale, on any cloud. Hundreds of the world’s leading enterprises, including Audi, Bud Financial, Capital One, SkyPoint Cloud, and many more, rely on DataStax to deliver GenAI. Learn more at DataStax.com.

© 2024 DataStax Inc., All Rights Reserved. DataStax is a registered trademark of DataStax, Inc. and its subsidiaries in the United States and/or other countries.

View source version on businesswire.com: https://www.businesswire.com/news/home/20240318806562/en/

Contacts

Regan Schiappa
press@datastax.com



Thinking about trading options or stock in NVIDIA, Mongodb, MicroStrategy, Carnival, or … – KXAN

MMS Founder
MMS RSS

Posted on mongodb google news. Visit mongodb google news

Thinking about trading options or stock in NVIDIA, Mongodb, MicroStrategy, Carnival, or AT&T?


NEW YORK, March 18, 2024 /PRNewswire/ — InvestorsObserver issues critical PriceWatch Alerts for NVDA, MDB, MSTR, CCL, and T.

Click a link below, then choose between an in-depth options trade idea report or a stock score report.

Options Report – Ideal trade ideas on up to seven different options trading strategies. The report shows all vital aspects of each option trade idea for each stock.

Stock Report – Measures a stock’s suitability for investment with a proprietary scoring system combining short and long-term technical factors with Wall Street’s opinion including a 12-month price forecast.

  1. NVDA: https://www.investorsobserver.com/lp/pr-options-lp-2/?stocksymbol=NVDA&prnumber=202403183
  2. MDB: https://www.investorsobserver.com/lp/pr-options-lp-2/?stocksymbol=MDB&prnumber=202403183
  3. MSTR: https://www.investorsobserver.com/lp/pr-options-lp-2/?stocksymbol=MSTR&prnumber=202403183
  4. CCL: https://www.investorsobserver.com/lp/pr-options-lp-2/?stocksymbol=CCL&prnumber=202403183
  5. T: https://www.investorsobserver.com/lp/pr-options-lp-2/?stocksymbol=T&prnumber=202403183

(Note: You may have to copy this link into your browser then press the [ENTER] key.)

InvestorsObserver provides patented technology to some of the biggest names on Wall Street and creates world-class investing tools for the self-directed investor on Main Street. We have a wide range of tools to help investors make smarter decisions when investing in stocks or options.

Cision View original content to download multimedia:https://www.prnewswire.com/news-releases/thinking-about-trading-options-or-stock-in-nvidia-mongodb-microstrategy-carnival-or-att-302091459.html

SOURCE InvestorsObserver

Article originally posted on mongodb google news. Visit mongodb google news



WildFly 31 Delivers Support for Jakarta EE 10 and the New WildFly Glow Provisioning Tools

MMS Founder
MMS Shaaf Syed

Article originally posted on InfoQ. Visit InfoQ

The WildFly community has released WildFly 31, which delivers support for Jakarta MVC 2.1 and WildFly Glow, a CLI tool and Maven plugin that analyzes the usage of subsystems and suggests a more lightweight runtime, e.g., for running in Docker containers. WildFly 31 also introduces stability levels so users can choose features more carefully for their different use cases. Other updates include upgrades to MicroProfile 6.1, Hibernate 6.4.2, and Jakarta EE 10. WildFly Core now also supports JDK 21, the latest LTS version of the JDK.

According to Brian Stansberry, Sr. Principal Software Engineer at Red Hat, the journey to Glow started a couple of years ago with the introduction of Galleon, a provisioning tool enabling containerized deployments. Glow can scan the WAR file content and identify the required Galleon layers based on rulesets, e.g., the jaxrs rule. WildFly Glow also detects, for example, the data source being used and suggests the required add-ons. Once the identification process is complete, the correct Galleon layers are then selected. Furthermore, the Glow CLI enables users to provision a WildFly server, a bootable JAR, or a Docker container image that can run on orchestration platforms like Kubernetes.

WildFly 31 implements the Jakarta EE 10 Platform, the Web Profile, and the Core Profile. The Core Profile can run with JDK 21, the latest LTS release, whereas the Jakarta EE 10 Platform and Web Profile can run with JDK 11 and 17. The recommended version for running WildFly 31 is JDK 17. WildFly 31 also implements most of the core MicroProfile specifications, having dropped support for MicroProfile Metrics in favor of Micrometer, rendering it incompatible with that MicroProfile specification. The WildFly team has also shared the compatibility evidence for MicroProfile 6.1 that provides more details on the implemented specifications. Furthermore, this release also introduces two new quickstarts, Micrometer and MicroProfile LRA.

A new community feature allows users to export WildFly configuration so that any other instance of WildFly can be booted with it, thereby creating a configuration replica. Community features are part of the stability levels also introduced in this release. Stability levels (experimental, preview, community, default) give users more visibility into upcoming and experimental features and provide the ability to opt out of features.

The WildFly team also seems to have focused on more accessible learning options. All quickstarts are now deployable as ZIP files or bootable JARs. Where applicable, Helm charts are used for deployment on Kubernetes derivatives like Red Hat OpenShift. Furthermore, the quickstarts also include tests such as smoke tests, as well as the Getting Started Maven archetype, the Get Started page, and user guides.

A detailed list of release notes is available on the WildFly release page.



Eric Evans Encourages DDD Practitioners to Experiment with LLMs

MMS Founder
MMS Thomas Betts

Article originally posted on InfoQ. Visit InfoQ

In his keynote presentation at Explore DDD, Eric Evans, author of Domain-Driven Design, argued that software designers need to look for innovative ways to incorporate large language models into their systems. He encouraged everyone to start learning about LLMs and conducting experiments now, and sharing the results and learnings from those experiments with the community.

Evans believes there can be good combinations of DDD and AI-oriented software. He said, “because some parts of a complex system never fit into structured parts of domain models, we throw those over to humans to handle. Maybe we’ll have some hard-coded, some human-handled, and a third, LLM-supported category.”

Using language familiar to DDD practitioners, he said a trained language model is a bounded context. Instead of using models trained on broad language and intended for general purposes, such as ChatGPT, training a language model on the ubiquitous language of a bounded context makes it far more useful for specific needs.

With generic LLMs, someone has to write careful, artificial prompts to achieve a desired response. Instead, Evans proposed having several fine-tuned models, each intended for a different purpose. He sees this as a strong separation of concerns. He predicts future domain modelers will identify tasks and subdomains that involve interpreting natural language input, and will naturally slot that into their designs. The infrastructure isn’t quite ready for that yet, but trends suggest it will come soon. 

Evans emphasized that his thoughts must be considered in the context of when he was speaking, on March 14, 2024, because the landscape is changing so rapidly. Six months ago, he did not know enough about the subject, and a year from now his comments may be irrelevant. He compared our current situation to the late 90s, when he learned multiple ways to build a website, none of which would be applicable today.

Throughout the conference, other notable voices in the DDD community responded to Evans’ thoughts. Vaughn Vernon, author of Implementing Domain-Driven Design, was largely supportive of the idea of finding novel uses for LLMs beyond the common chatbot. In his vision for self-healing software, he sees a place for a tool like ChatGPT to respond to runtime exceptions, and be a “fix suggester” that automatically creates a pull request with suggested code to resolve the bug.

However, some people remain skeptical about all the benefits of LLMs. During a panel discussion on the intersection of DDD and LLMs, Chris Richardson, author of Microservice Patterns, expressed concerns about the high monetary and computing costs of LLMs. When Richardson wondered if any service that operated an LLM could ever turn a profit, Evans replied that fine tuning makes an inexpensive model cheaper and faster than an expensive model. Fellow panelist, Jessica Kerr, a principal developer advocate at Honeycomb.io, said, “we need to find what’s valuable, then make it economical.”

During his keynote, Evans went into detail about some of the experiments he conducted as part of his personal education into the capabilities of LLMs. At first, working with game designer Reed Berkowitz, he tried using ChatGPT to make a non-player character (NPC) respond to player input. An evolution of prompt engineering led him to the realization that the responses were more consistent if broken into smaller chunks, rather than one, long prompt. This approach follows his ideas of DDD breaking down complex problems.

The need for smaller, more specialized prompts naturally led to wanting a more specialized model, which would both provide better output while also being more performant and cost effective. His goal in explaining his research methods was to show how useful it can be to experiment with the technology. Although frustrating at times, the process was immensely rewarding, and many attendees said they could relate to the sense of satisfaction when you learn how to get something new working for the first time.

The Explore DDD conference took place in Denver, Colorado, from March 12-15, 2024. Most presentations during the conference were recorded, and will be posted to the @ExploreDDD YouTube channel over the next several weeks, and shared on the Explore DDD LinkedIn page, starting with the opening keynote by Eric Evans.



Wear OS Gets New, More Efficient Text-to-Speech Engine

MMS Founder
MMS Sergio De Simone

Article originally posted on InfoQ. Visit InfoQ

Google has announced a new text-to-speech engine for Wear OS, its Android variant aimed at smartwatches and other wearables, that supports over 50 languages and is faster than its predecessor thanks to smaller ML models.

According to Google’s Ouiam Koubaa and Yingzhe Li, the new text-to-speech engine is particularly geared toward low-memory devices, such as those that power the services for which wearables are best suited, including accessibility services, exercise apps, navigation cues, and reading-aloud apps.

Text-to-speech turns text into natural-sounding speech across more than 50 languages powered by Google’s machine learning (ML) technology. The new text-to-speech engine on Wear OS uses smaller and more efficient prosody ML models to bring faster synthesis on Wear OS devices.

The new text-to-speech engine does not introduce new APIs to synthesize speech, meaning developers can keep using the previously existing speak method, along with the rest of the methods previously available.

Developers should keep in mind that the new engine takes about 10 seconds to get ready when the app is initialized. Therefore, apps that want to use speech right after launch should initialize the engine as soon as possible by calling TextToSpeech(applicationContext, callback) and then synthesize the desired text from the passed callback.
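
To make that concrete, below is a minimal sketch in Java of the eager-initialization pattern described above; the class name and utterance text are illustrative, and the snippet assumes it runs inside an Android/Wear OS app with a valid application context.

```java
import android.content.Context;
import android.speech.tts.TextToSpeech;

// Sketch: initialize the engine as early as possible (it can take ~10 seconds
// to become ready) and synthesize only once the init callback reports success.
public class SpeechHelper {
    private TextToSpeech tts;

    public void init(Context applicationContext) {
        tts = new TextToSpeech(applicationContext, status -> {
            if (status == TextToSpeech.SUCCESS) {
                // QUEUE_FLUSH replaces anything already queued; the last
                // argument is an utterance ID usable for progress callbacks.
                tts.speak("Workout started", TextToSpeech.QUEUE_FLUSH, null, "greeting");
            }
        });
    }

    public void shutdown() {
        if (tts != null) tts.shutdown(); // release engine resources when done
    }
}
```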

An additional caveat concerns the possibility that the new engine can synthesize speech in a language other than the user’s preferred language. This may happen, for example, when sending an emergency call, in which case the language corresponding to the actual locale the user is in is preferred over the user’s chosen UI language.

The new text-to-speech engine can be used on devices running Wear OS 4, released last July, or higher.

Besides text-to-speech synthesis, Wear OS also provides a speech recognition service through the SpeechRecognizer API, which, however, is not appropriate for continuous recognition since it relies on remote services.



2 Stocks That Will Surge on the Next Wave of AI – The Globe and Mail

MMS Founder
MMS RSS

Posted on mongodb google news. Visit mongodb google news

It’s been a little more than 15 months since ChatGPT launched, and it’s clear who the early winners of the AI revolution are.

AI hardware stocks like Nvidia have been far and away the leaders in the new tech boom. Nvidia’s GPUs are the core component required for running intense models like ChatGPT, and demand for them has been enormous, driving Nvidia’s revenue up by more than 200% and its profits by an even greater multiple. Companies that partner with Nvidia to sell hardware have also emerged as winners. Those include Super Micro Computer, which specializes in selling high-density servers and storage equipment that work well for running AI applications; Arm Holdings, which licenses its power-conserving chip designs to Nvidia to use for running AI models; and Oracle, which has seen strong growth in its cloud infrastructure business as demand for Nvidia-based superclusters jumps.

Even other chip stocks, like AMD and Intel, have soared in anticipation of spiking demand even as those companies have yet to see significant revenue growth from the AI boom.

However, the cloud infrastructure giants, AI start-ups, and others aren’t stocking up GPUs to hoard them. They’re aiming to run new applications and software programs, and it’s a good bet that software companies will be the next winners in the AI revolution. Keep reading to see two that could capitalize on the new tech boom.


1. MongoDB

MongoDB (NASDAQ: MDB) has risen to prominence as a leader in NoSQL databases. It helps organizations organize data that doesn’t conform to a strict spreadsheet grid.

As a database tool, there’s a natural overlap between MongoDB’s utility and the potential of generative AI, which makes it easier to find information, apply it, run models, or transform it as needed.

MongoDB has begun incorporating some generative AI features into its software. For example, in December, the company launched Vector Search, which uses generative AI to help its customers build applications with MongoDB data. However, the biggest tailwinds from AI are still yet to come for MongoDB.
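
As a rough illustration of what querying with Vector Search involves, the sketch below uses the MongoDB Java driver to run an Atlas $vectorSearch aggregation stage; the cluster URI, database, collection, index, and field names are hypothetical placeholders, and the query vector would normally come from an embedding model.

```java
import com.mongodb.client.MongoClient;
import com.mongodb.client.MongoClients;
import com.mongodb.client.MongoCollection;
import org.bson.Document;
import java.util.List;

public class VectorSearchSketch {
    public static void main(String[] args) {
        try (MongoClient client = MongoClients.create("mongodb+srv://<cluster-uri>")) {
            MongoCollection<Document> docs =
                client.getDatabase("app").getCollection("articles");

            // In practice this comes from an embedding model; truncated here.
            List<Double> queryVector = List.of(0.12, -0.53, 0.88);

            // $vectorSearch is an Atlas aggregation stage; the index and
            // path names below are hypothetical.
            Document stage = new Document("$vectorSearch",
                new Document("index", "articles_vector_index")
                    .append("path", "embedding")
                    .append("queryVector", queryVector)
                    .append("numCandidates", 100)
                    .append("limit", 5));

            docs.aggregate(List.of(stage))
                .forEach(doc -> System.out.println(doc.toJson()));
        }
    }
}
```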

On the recent earnings call, CEO Dev Ittycheria walked investors through the implications of AI for the business, saying, “It’s important to understand that there are three layers to the AI stack. The first layer is the underlying compute and LLMs, the second layer is the fine-tuning of the models and building of AI applications, and the third layer is deploying and running applications that end users interact with.”

MongoDB operates in the second and third layers of that stack, and Ittycheria said that MongoDB’s customers are still experimenting with their AI applications. As experimentation moves to action, the company looks well positioned to benefit from an uptick in demand once businesses are confident that they can unlock the power of AI. That could take several quarters, but the company is likely to be a long-term winner from AI. In the meantime, MongoDB is still growing rapidly, with revenue up 27% in its most recent quarter.

2. Duolingo

Duolingo (NASDAQ: DUOL), the leading language-learning app, overcame some early concerns that AI could disrupt its business model rather than complement it.

Last spring, the stock briefly fell when education platform Chegg said it was losing customers to ChatGPT, but investors have since realized that Duolingo looks to be a winner from AI, and its shares have surged over the last few months as it’s delivered impressive growth.

Duolingo has moved quickly to incorporate AI tools into its app. A year ago, it unveiled a new conversation mode built on OpenAI’s GPT4, which served as the foundation of its new highest-tier product, Duolingo Max. The company has also begun using AI to create sentences for its lessons and has laid off some contractors as it relies more on AI and less on humans.

Looking ahead, it’s easy to imagine how Duolingo can more fully capitalize on the potential of generative AI. It could develop an AI conversation partner so users can practice a new language in a real conversation. It could offer AI-based customizable lessons so users can focus on context or a certain set of vocabulary depending on their needs, and it can use AI to accelerate its expansion beyond languages into areas like early literacy, math, and music.

Doing so will not only help Duolingo reach more users at a lower cost, but it could also help it gain greater adoption in K-12 and university education, tapping into a potentially highly lucrative revenue stream.

Duolingo’s leadership is well versed in AI and technology, as CEO Luis von Ahn had previously sold a reverse image search technology to Alphabet‘s Google, and also helped develop CAPTCHA and ReCAPTCHA, tools that prevent robots from logging into a website.

Duolingo stock is pricey, but the company is growing quickly, its profitability is improving, and generative AI significantly expands its addressable market.

Should you invest $1,000 in MongoDB right now?

Before you buy stock in MongoDB, consider this:

The Motley Fool Stock Advisor analyst team just identified what they believe are the 10 best stocks for investors to buy now… and MongoDB wasn’t one of them. The 10 stocks that made the cut could produce monster returns in the coming years.

Stock Advisor provides investors with an easy-to-follow blueprint for success, including guidance on building a portfolio, regular updates from analysts, and two new stock picks each month. The Stock Advisor service has more than tripled the return of the S&P 500 since 2002*.

See the 10 stocks

*Stock Advisor returns as of March 18, 2024

Suzanne Frey, an executive at Alphabet, is a member of The Motley Fool’s board of directors. Jeremy Bowman has positions in MongoDB. The Motley Fool has positions in and recommends Advanced Micro Devices, Alphabet, Duolingo, MongoDB, Nvidia, and Oracle. The Motley Fool recommends Chegg and Intel and recommends the following options: long January 2023 $57.50 calls on Intel, long January 2025 $45 calls on Intel, and short May 2024 $47 calls on Intel. The Motley Fool has a disclosure policy.

Article originally posted on mongodb google news. Visit mongodb google news



GitHub Announces Upgrade to Action Runners with 4-vCPU, 16 GiB Memory

MMS Founder
MMS Aditya Kulkarni

Article originally posted on InfoQ. Visit InfoQ

GitHub recently announced an enhancement to their GitHub Actions-hosted runners. Moving forward, workflows from public repositories on Linux or Windows that utilize GitHub’s default labels will be executed on the new 4-vCPU runners.

Larissa Fortuna, Product Manager at GitHub, talked about the upgraded infrastructure in a blog post. The enhanced infrastructure allows for a performance increase of up to 25% on most CI/CD tasks, without needing any changes to configurations. Fortuna noted that GitHub Actions has been available at no cost for public repositories since its launch in 2019. Despite its widespread adoption by open source communities, the platform had consistently utilized the same 2-vCPU virtual machines.

Starting December 1, 2023, GitHub began upgrading the Linux and Windows Actions runners to 4-vCPU virtual machines with 16 GiB of memory, doubling their previous capacity. These enhancements lead to quicker feedback on pull requests and reduce the wait time for builds to complete. Teams handling larger workloads will benefit from machines that offer double the memory.

GitHub Actions has become essential for open-source projects, offering free, easy-to-use automation and build servers. This rapid adoption is also due to the open-source community and the GitHub Marketplace, which houses over 20,000 actions and apps, allowing developers at organizations of all sizes to enhance their workflows with GitHub Actions and Apps.

Earlier this year, GitHub made headlines as it integrated AI to enhance the developer experience on its platform, and offered interactive AI features through GitHub Copilot Enterprise and GitHub Copilot Chat. These tools, including a context-aware AI assistant powered by GPT-4, enable users to interact with Copilot directly in their browser and within their Integrated Development Environment (IDE), assisting with a wide range of tasks from explaining coding concepts to identifying security vulnerabilities and writing unit tests.

Martin Woodward, GitHub’s Vice President of Developer Relations, commented on the GitHub Actions-hosted runners, mentioning,

“GitHub is the home for the open source community, so I’m thrilled we’ve been able to give all open source projects access to Linux and Windows machines that have twice the v-CPUs, twice the memory and 10 times the storage for their builds, for free. This investment will give valuable time back to the maintainers of open source projects who benefited from over 7 billion build minutes with GitHub Actions in the past year alone.”

To start using the upgraded infrastructure, public repositories need only run their workflows with any of the current Ubuntu or Windows labels, and they will automatically operate on the new 4-vCPU hosted runners.



Java News Roundup: New JEP Drafts, Infinispan 15, Payara Platform, Alpaquita Containers with CRaC

MMS Founder
MMS Michael Redlich

Article originally posted on InfoQ. Visit InfoQ

This week’s Java roundup for March 11th, 2024 features news highlighting: new JEP drafts, Stream Gatherers (Second Preview) and Hot Code Heap; Infinispan 15; the March 2024 edition of Payara Platform; Alpaquita Containers with CRaC; the first release candidate of JobRunr 7.0; and milestone and point releases for Spring projects, Quarkus, Helidon and Micronaut.

OpenJDK

Viktor Klang, Software Architect, Java Platform Group at Oracle, has introduced JEP Draft 8327844, Stream Gatherers (Second Preview), that proposes a second round of preview following JEP 461, Stream Gatherers (Preview), which is to be delivered in the upcoming release of JDK 22. This will allow additional time for feedback and more experience with the feature, with no user-facing changes over JEP 461. The feature was designed to enhance the Stream API to support custom intermediate operations that will “allow stream pipelines to transform data in ways that are not easily achievable with the existing built-in intermediate operations.” More details on this JEP may be found in the original design document and this InfoQ news story.
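
As a quick illustration of the feature, the following sketch uses one of the built-in gatherers, Gatherers.windowFixed(), to group a stream into fixed-size windows as a custom intermediate operation; it requires JDK 22 with preview features enabled (--enable-preview).

```java
import java.util.List;
import java.util.stream.Gatherers;
import java.util.stream.Stream;

public class GathererSketch {
    public static void main(String[] args) {
        // windowFixed(3) emits consecutive groups of three elements;
        // the final window may contain fewer elements.
        List<List<Integer>> windows = Stream.of(1, 2, 3, 4, 5, 6, 7)
            .gather(Gatherers.windowFixed(3))
            .toList();
        System.out.println(windows); // [[1, 2, 3], [4, 5, 6], [7]]
    }
}
```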

Dmitry Chuyko, Performance Architect at BellSoft, has introduced JEP Draft 8328186, Hot Code Heap, that proposes to “extend the segmented code cache with a new optional ‘hot’ code heap to compactly accommodate a part of non-profiled methods, and to extend the compiler control mechanism to mark certain methods as ‘hot’ so that they compile into the hot code heap.”

JDK 23

Build 14 of the JDK 23 early-access builds was made available this past week featuring updates from Build 13 that include fixes for various issues. More details on this release may be found in the release notes.

JDK 22

Build 36 remains the current build in the JDK 22 early-access builds. Further details on this build may be found in the release notes.

For JDK 23 and JDK 22, developers are encouraged to report bugs via the Java Bug Database.

BellSoft

BellSoft has released their Alpaquita Containers product, which includes Alpaquita Linux and Liberica JDK, with support for Coordinated Restore at Checkpoint (CRaC). Performance measurements reveal a 164x faster startup and 1.1x smaller images. InfoQ will follow up with a more detailed news story.

Spring Framework

Versions 6.1.5, 6.0.18 and 5.3.33 of Spring Framework have been released, primarily to address CVE-2024-22259, Spring Framework URL Parsing with Host Validation (2nd report), a vulnerability in which applications that use the UriComponentsBuilder class to parse an externally provided URL and perform validation checks on the host of the parsed URL may be vulnerable to an open-redirect attack or a server-side request forgery attack if the URL is used after passing the validation checks. New features include: allowing the UriTemplate class to be built with an empty template; and a refinement of the getContentLength() method, defined in classes that implement the HttpMessageConverter interface, to improve null safety of the return value. These versions will be shipped with the upcoming releases of Spring Boot 3.2.4 and Spring Boot 3.1.10, respectively. More details on these releases may be found in the release notes for version 6.1.5, version 6.0.18 and version 5.3.33.
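
For context, the vulnerable pattern the CVE describes looks roughly like the sketch below, in which an application parses an externally supplied URL with UriComponentsBuilder and allow-lists its host before following it; the trusted host name is a hypothetical example, and the remedy is to upgrade to one of the patched versions above rather than to change this code.

```java
import org.springframework.web.util.UriComponents;
import org.springframework.web.util.UriComponentsBuilder;

public class RedirectCheckSketch {
    // Hypothetical allow-list entry for illustration only.
    private static final String TRUSTED_HOST = "example.com";

    // Parses the external URL and checks its host against the allow-list;
    // on unpatched versions, parsing quirks could let a crafted URL pass
    // this check yet resolve elsewhere when used.
    static boolean isTrusted(String externalUrl) {
        UriComponents uri = UriComponentsBuilder.fromUriString(externalUrl).build();
        return TRUSTED_HOST.equalsIgnoreCase(uri.getHost());
    }

    public static void main(String[] args) {
        System.out.println(isTrusted("https://example.com/welcome"));   // true
        System.out.println(isTrusted("https://attacker.example/phish")); // false
    }
}
```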

The second milestone release of Spring Data 2024.0.0 provides new features: predicate-based QueryEngine for Spring Data Key-Value; and a transaction option derivation for MongoDB based on @Transactional annotation labels. There were also upgrades to sub-projects such as: Spring Data Commons 3.3.0-M2; Spring Data MongoDB 4.3.0-M2; Spring Data Elasticsearch 5.3.0-M2; and Spring Data Neo4j 7.3.0-M2. More details on this release may be found in the release notes.

Similarly, versions 2023.1.4 and 2023.0.10 of Spring Data have been released providing bug fixes and respective dependency upgrades to sub-projects such as: Spring Data Commons 3.2.4 and 3.1.10; Spring Data MongoDB 4.2.4 and 4.1.10; Spring Data Elasticsearch 5.2.4 and 5.1.10; and Spring Data Neo4j 7.2.4 and 7.1.10. These versions may also be consumed by the upcoming releases of Spring Boot 3.2.4 and 3.1.10, respectively.

The release of Spring AI 0.8.1 delivers new features such as support for: the Google Gemini AI model; VertexAI Gemini Chat; Gemini Function Calling; and native compilation of Gemini applications. More details on this release may be found in the list of issues.

Payara

Payara has released their March 2024 edition of the Payara Platform that includes Community Edition 6.2024.3 and Enterprise Edition 6.12.0. Both editions feature notable changes such as: more control in system package selection with Apache Felix that streamlines configuration and reduces potential conflicts; a resolution for generated temporary files in the /tmp folder upon deployment that weren’t getting deleted; and improved reliability to set the correct status when restarting a server instance from the Admin UI. More details on these releases may be found in the release notes for Community Edition 6.2024.3 and Enterprise Edition 6.12.0.

Micronaut

The Micronaut Foundation has released version 4.3.6 of the Micronaut Framework featuring Micronaut Core 4.3.11, bug fixes, improvements in documentation, and updates to modules: Micronaut Serialization, Micronaut Azure, Micronaut RxJava 3 and Micronaut Validation. Further details on this release may be found in the release notes.

Quarkus

Quarkus 3.2.11.Final, a maintenance LTS release, ships with dependency upgrades and security fixes to address:

  • CVE-2024-25710, a denial of service caused by an infinite loop for a corrupted DUMP file.
  • CVE-2024-1597, a PostgreSQL JDBC Driver vulnerability that allows an attacker to inject SQL if using PreferQueryMode=SIMPLE.
  • CVE-2024-1023, a memory leak due to the use of Netty FastThreadLocal data structures in Vert.x.
  • CVE-2024-1300, a memory leak when a TCP server is configured with TLS and SNI support.
  • CVE-2024-1726, a denial of service that may be triggered in RESTEasy Reactive when security checks for some inherited endpoints are performed after serialization.

More details on this release may be found in the changelog.

Helidon

The release of Helidon 4.0.6 provides notable changes such as: support for injecting instances of the UniversalConnectionPool interface; a deprecation of the await(long, TimeUnit) method defined in the Awaitable interface in favor of await(Duration); and an enhancement of the Status class with additional standard HTTP status codes, namely: 203, Non-Authoritative Information; 207, Multi-Status; 507, Insufficient Storage; 508, Loop Detected; 510, Not Extended; and 511, Network Authentication Required. More details on this release may be found in the changelog.

Infinispan

Red Hat has released version 15.0.0 of Infinispan that delivers new features such as: a JDK 17 baseline; support for Jakarta EE; a connector to the Redis Serialization Protocol; support for distributed vector indexes and KNN queries; and an improved tracing subsystem. More details on this release may be found in the release notes and InfoQ will follow up with a more detailed news story.

Micrometer

Version 1.13.0-M2 of Micrometer Metrics delivers bug fixes, dependency upgrades and new features such as: aligning the JettyClientMetrics class with Jetty 12; support for Prometheus 1.x; and support for the @Counted annotation on classes, with an update to the CountedAspect class to handle when @Counted annotates a class. More details on this release may be found in the release notes.

Similarly, versions 1.12.4 and 1.11.10 of Micrometer Metrics ship with bug fixes, dependency upgrades and a new feature in which the INSTANCE field, defined in the DefaultHttpClientObservationConvention class, was declared final, as being non-final seemed to be accidental. More details on these releases may be found in the release notes for version 1.12.4 and version 1.11.10.

Versions 1.3.0-M2, 1.2.4 and 1.1.11 of Micrometer Tracing provide bug fixes, dependency upgrades to Micrometer Metrics 1.13.0-M2, 1.12.4 and 1.11.10, respectively, and a completed task that excludes the benchmarks module from the BOM because it is not published. More details on these releases may be found in the release notes for version 1.3.0-M2, version 1.2.4 and version 1.1.11.

Project Reactor

Project Reactor 2023.0.4, the fourth maintenance release, provides dependency upgrades to reactor-core 3.6.4 and reactor-netty 1.1.17. There was also a realignment to version 2023.0.4 with the reactor-kafka 1.3.23, reactor-pool 1.0.5, reactor-addons 3.5.1 and reactor-kotlin-extensions 1.2.2 artifacts that remain unchanged. More details on this release may be found in the changelog.

Next, Project Reactor 2022.0.17, the seventeenth maintenance release, provides dependency upgrades to reactor-core 3.5.15 and reactor-netty 1.1.17. There was also a realignment to version 2022.0.17 with the reactor-kafka 1.3.23, reactor-pool 1.0.5, reactor-addons 3.5.1 and reactor-kotlin-extensions 1.2.2 artifacts that remain unchanged. Further details on this release may be found in the changelog.

And finally, the release of Project Reactor 2020.0.42, codenamed Europium-SR42, provides dependency upgrades to reactor-core 3.4.36 and reactor-netty 1.0.43. There was also a realignment to version 2020.0.42 with the reactor-kafka 1.3.23, reactor-pool 0.2.12, reactor-addons 3.4.10, reactor-kotlin-extensions 1.1.10 and reactor-rabbitmq 1.5.6 artifacts that remain unchanged. More details on this release may be found in the changelog.

Apache Software Foundation

Versions 5.0.0-alpha-7 and 4.0.20 of Apache Groovy feature bug fixes, dependency upgrades and an improvement to the getMessage() method defined in the MissingMethodException class to eliminate truncating the error message at 60 characters. More details on these releases may be found in the release notes for version 5.0.0-alpha-7 and version 4.0.20.

The release of Apache Camel 4.4.1 provides bug fixes, dependency upgrades and notable improvements such as: support for ${} expressions with dependencies in Camel JBang; and port validation in Camel gRPC that now checks whether a port was specified. More details on this release may be found in the release notes.

Eclipse Foundation

Version 4.5.5 of Eclipse Vert.x has been released delivering notable changes such as: a deprecation of the toJson() method defined in the Buffer interface in favor of toJsonValue(); a resolution to an OutOfMemoryException after an update to the Certificate Revocation List; and a new requirement that an implementation of the CompositeFuture interface must unregister itself against its component upon completion. More details on this release may be found in the release notes and list of deprecations and breaking changes.

The release of Eclipse Mojarra 4.0.6, the compatible implementation to the Jakarta Faces specification, ships with notable changes such as: ensure that the getViews() method defined in the ViewHandler class also returns programmatic facelets; and removal of the SKIP_ITERATION enumeration as it was superseded by the VisitHint enumeration. More details on this release may be found in the release notes.

Piranha

The release of Piranha 24.2.0 delivers notable changes to the Piranha CLI such as: the ability to generate a macOS GraalVM binary; and the addition of version and coreprofile sub-commands. Further details on this release may be found in their documentation and issue tracker.

JobRunr

The first release candidate of JobRunr 7.0.0, a library for background processing in Java that is distributed and backed by persistent storage, ships with bug fixes, enhancements and new features: built-in support for virtual threads, enabled by default when using JDK 21; and the InMemoryStorageProvider class now allows for a poll interval as small as 200ms, which is useful for testing. Breaking changes include: the delete(String id) method in the JobScheduler class has been renamed to deleteRecurringJob(String id); and updates to the StorageProvider interface and the Page and PageRequest classes that include new features. More details on this release may be found in the release notes.
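
For readers new to the library, a minimal sketch of enqueueing a background job looks like the following; it uses the InMemoryStorageProvider mentioned above, and the job body is illustrative.

```java
import org.jobrunr.configuration.JobRunr;
import org.jobrunr.scheduling.BackgroundJob;
import org.jobrunr.storage.InMemoryStorageProvider;

public class JobRunrSketch {
    public static void main(String[] args) {
        // Configure JobRunr with in-memory storage (handy for tests) and
        // start an embedded background job server.
        JobRunr.configure()
               .useStorageProvider(new InMemoryStorageProvider())
               .useBackgroundJobServer()
               .initialize();

        // On JDK 21, JobRunr 7 runs jobs such as this one on virtual
        // threads by default.
        BackgroundJob.enqueue(() -> System.out.println("processed in the background"));
    }
}
```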

JBang

Version 0.115.0 of JBang delivers bug fixes and new features such as: arguments passed to an alias are now appended (instead of being replaced) to any arguments that are already defined in the alias; and support for specifying system properties when using the jbang app install command. More details on this release may be found in the release notes.

LangChain4j

Version 0.28.0 of LangChain for Java (LangChain4j) provides many bug fixes, new integrations with Anthropic and Zhipu AI, and notable updates such as: a new Filter API to support embedding stores like Milvus and Pinecone; the ability to load documents recursively and with glob/regex filtering using the FileSystemDocumentLoader class; and an implementation of the missing parameters in the rest of the Azure OpenAI APIs, with a clean-up of the existing responseFormat parameter so that all parameters are consistently in the same order. Further details on this release may be found in the release notes.
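
As a sketch of the new Filter API, the snippet below composes a metadata filter for restricting which embeddings a store considers during similarity search; the MetadataFilterBuilder entry point is assumed from the release notes, and the metadata keys are hypothetical.

```java
import dev.langchain4j.store.embedding.filter.Filter;
import static dev.langchain4j.store.embedding.filter.MetadataFilterBuilder.metadataKey;

public class FilterSketch {
    public static void main(String[] args) {
        // Compose two metadata conditions into a single filter; stores that
        // support filtering (e.g., Milvus, Pinecone) can apply it at search time.
        Filter filter = metadataKey("userId").isEqualTo("alice")
                .and(metadataKey("year").isGreaterThan(2020));
        System.out.println(filter);
    }
}
```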

Java Operator SDK

The release of Java Operator SDK 4.8.1 features dependency upgrades and notable improvements such as: explicit use of a fixed thread pool; added logging for tracing issues with events; and changes to the primary-to-secondary index edge case for the dynamic mapper in the DefaultPrimaryToSecondaryIndex class. More details on this release may be found in the release notes.

Gradle

The third release candidate of Gradle 8.7 provides continued improvements such as: support for Java 22 for compiling, testing, and running JVM-based projects; build-cache improvements for Groovy DSL script compilation; and improvements to lazy configuration, error and warning messages, the configuration cache, and the Kotlin DSL. More details on this release may be found in the release notes.
