6 Tracks Not To Miss at QCon San Francisco, October 2-6, 2023: ML, Architecture, Resilience & More!

MMS Founder
MMS Artenisa Chatziou

Article originally posted on InfoQ. Visit InfoQ

In 2023, the software development community faces many challenges. The macroeconomic environment is forcing everyone to “do more with less”, new privacy and data management laws have been enacted, and generative AI is creating both new opportunities and unforeseen issues. At InfoQ’s international software development conference, QCon San Francisco (October 2-6), senior software practitioners driving innovation and change in software development will explore real-world architectures, technology, and techniques to help you solve such challenges.

Behind QCon San Francisco is a collective of distinguished senior software leaders who carefully curate the QCon topics based on the crucial trends and essential best practices you need to know about. This year, the conference will contain 12 tracks. If you’re a senior software developer, software architect, or tech team lead, you won’t want to miss these stand-out tracks.

QCon San Francisco Tracks for the Architect, Developer, and Technical Leader

Modern ML: GenAI, Trust, & Path2Prod

Modern machine learning (ML) is a rapidly evolving field. Hien Luu, Sr. Engineering Manager @DoorDash and author of Beginning Apache Spark 3, will explore the latest trends in ML, focusing on three key areas: generative AI (GenAI), trust, and the path to production. You’ll learn how these areas are shaping the future of ML and how they can be used to build more powerful and reliable ML systems.

Architecting for the Cloud

Architecting for the cloud is a complex and challenging task, but it can also be very rewarding. Khawaja Shams, Co-Founder & CEO @Momento, previously @NASA and @Amazon, will explore the key principles of cloud architecture, such as scalability, elasticity, and security. Join us to learn how to design and build cloud-based systems that are reliable, efficient, and cost-effective.

Designing for Resilience

This track will explore the principles of designing systems that are resilient to disruptions. Javier Fernandez-Ivern, Staff Software Engineer @Netflix with over 20 years in Software Engineering, will share insights on how to design systems that can withstand unexpected events, such as natural disasters, cyberattacks, or hardware failures.

Architectures You’ve Always Wondered About

This track will explore real-world examples of innovative companies pushing the limits with modern software systems. Wes Reisz, Technical Principal @thoughtworks and Creator/Co-host of the InfoQ podcast, will share stories of how these companies have scaled their systems to handle massive amounts of traffic, data, and complexity.

Platform Engineering Done Well

If done well, platform engineering can help organizations deliver software faster, more reliably, and more securely. Daniel Bryant, Java Champion, Co-author of “Mastering API Architecture”, Independent Technical Consultant, and InfoQ News Manager, will discuss best practices for building and maintaining a platform that is scalable, adaptable, and secure.

Modern Data Engineering & Architectures

In this track, Sid Anand, Chief Architect @Datazoom, Committer/PMC Apache Airflow, Previously: Netflix, LinkedIn, eBay, Etsy, & PayPal, will deep-dive into innovations happening in the data engineering space, including data streaming (stream processing, stream warehouse, stream graph), data APIs (gRPC, GraphQL, REST), data lineage, automated data pipelines, and cloud data platforms and containerization technologies.

Explore the full list of tracks that you can learn from at QCon San Francisco 2023 covering frontend development, deep-tech, JVM, languages of Infra, staff plus engineering, and more.

Why Attend QCon San Francisco?

  • Walk away with actionable insights: Learn how your peers are solving similar complex challenges right now.
  • Level-up with senior software leaders: Learn what’s next from world-class leaders pushing the boundaries.
  • No hype. No hidden marketing. No sales pitches: At QCon, there are no product pitches or hidden marketing. Your time is focused on learning, exploring, and researching, not being sold to.
  • Future-proof your career: Experience practical hands-on workshops where you can deep-dive into specific topics and learn in an intimate setting. Book your workshop days for under $920!

Join QCon San Francisco this October 2-6, and take advantage of the last early bird tickets. Don’t miss the chance to learn from software leaders at early adopter companies. Save valuable time without wasting resources. Get the assurance you are adopting the right technologies and skills.

Book your spot now!



MongoDB, Inc. (NASDAQ:MDB) Holdings Lowered by Principal Financial Group Inc.

MMS Founder
MMS RSS

Posted on mongodb google news. Visit mongodb google news

Principal Financial Group Inc. trimmed its stake in shares of MongoDB, Inc. (NASDAQ:MDB) by 16.4% during the first quarter, according to its most recent filing with the Securities and Exchange Commission. The institutional investor owned 7,068 shares of the company’s stock after selling 1,384 shares during the quarter. Principal Financial Group Inc.’s holdings in MongoDB were worth $1,648,000 as of its most recent filing with the Securities and Exchange Commission.

A number of other hedge funds and other institutional investors also recently made changes to their positions in the stock. Cherry Creek Investment Advisors Inc. raised its position in shares of MongoDB by 1.5% during the 4th quarter. Cherry Creek Investment Advisors Inc. now owns 3,283 shares of the company’s stock worth $646,000 after buying an additional 50 shares in the last quarter. CWM LLC raised its position in shares of MongoDB by 2.4% during the 1st quarter. CWM LLC now owns 2,235 shares of the company’s stock worth $521,000 after buying an additional 52 shares in the last quarter. Cetera Advisor Networks LLC raised its position in shares of MongoDB by 7.4% during the 2nd quarter. Cetera Advisor Networks LLC now owns 860 shares of the company’s stock worth $223,000 after buying an additional 59 shares in the last quarter. First Republic Investment Management Inc. raised its holdings in shares of MongoDB by 1.0% in the 4th quarter. First Republic Investment Management Inc. now owns 6,406 shares of the company’s stock worth $1,261,000 after purchasing an additional 61 shares in the last quarter. Finally, Janney Montgomery Scott LLC raised its holdings in shares of MongoDB by 4.5% in the 4th quarter. Janney Montgomery Scott LLC now owns 1,512 shares of the company’s stock worth $298,000 after purchasing an additional 65 shares in the last quarter. 88.89% of the stock is currently owned by institutional investors.

Insider Buying and Selling

In related news, Director Hope F. Cochran sold 2,174 shares of the stock in a transaction that occurred on Thursday, June 15th. The stock was sold at an average price of $373.19, for a total value of $811,315.06. Following the transaction, the director now owns 8,200 shares in the company, valued at $3,060,158. The transaction was disclosed in a filing with the Securities & Exchange Commission. In other MongoDB news, Director Dwight A. Merriman sold 1,000 shares of the stock in a transaction that occurred on Tuesday, July 18th. The stock was sold at an average price of $420.00, for a total transaction of $420,000.00. Following the transaction, the director now owns 1,213,159 shares in the company, valued at $509,526,780. The sale was disclosed in a filing with the SEC, which is available at the SEC website. Over the last quarter, insiders have sold 76,551 shares of company stock worth $31,143,942. Insiders own 4.80% of the company’s stock.

MongoDB Trading Up 3.0%

NASDAQ:MDB opened at $392.88 on Tuesday. The stock’s 50 day moving average price is $390.24 and its 200 day moving average price is $306.34. MongoDB, Inc. has a 52 week low of $135.15 and a 52 week high of $439.00. The company has a market capitalization of $27.73 billion, a P/E ratio of -113.55 and a beta of 1.11. The company has a debt-to-equity ratio of 1.44, a quick ratio of 4.19 and a current ratio of 4.19.

Wall Street Analysts Weigh In

A number of analysts recently commented on the stock. KeyCorp boosted their target price on shares of MongoDB from $372.00 to $462.00 and gave the stock an “overweight” rating in a report on Friday, July 21st. Truist Financial boosted their target price on shares of MongoDB from $420.00 to $430.00 and gave the stock a “buy” rating in a report on Friday. Capital One Financial assumed coverage on shares of MongoDB in a report on Monday, June 26th. They issued an “equal weight” rating and a $396.00 target price on the stock. Stifel Nicolaus boosted their target price on shares of MongoDB from $420.00 to $450.00 and gave the stock a “buy” rating in a report on Friday. Finally, Barclays lifted their target price on shares of MongoDB from $421.00 to $450.00 and gave the stock an “overweight” rating in a research report on Friday. One research analyst has rated the stock with a sell rating, three have assigned a hold rating and twenty have given a buy rating to the stock. Based on data from MarketBeat, the stock has an average rating of “Moderate Buy” and an average price target of $407.39.

MongoDB Profile

MongoDB, Inc. provides a general-purpose database platform worldwide. The company offers MongoDB Atlas, a hosted multi-cloud database-as-a-service solution; MongoDB Enterprise Advanced, a commercial database server for enterprise customers to run in the cloud, on-premises, or in a hybrid environment; and Community Server, a free-to-download version of its database, which includes the functionality that developers need to get started with MongoDB.

Want to see what other hedge funds are holding MDB? Visit HoldingsChannel.com to get the latest 13F filings and insider trades for MongoDB, Inc. (NASDAQ:MDB).

Institutional Ownership by Quarter for MongoDB (NASDAQ:MDB)

This instant news alert was generated by narrative science technology and financial data from MarketBeat in order to provide readers with the fastest and most accurate reporting. This story was reviewed by MarketBeat’s editorial team prior to publication. Please send any questions or comments about this story to contact@marketbeat.com.

Article originally posted on mongodb google news. Visit mongodb google news



Meta Open-Sources Code Generation LLM Code Llama

MMS Founder
MMS Anthony Alford

Article originally posted on InfoQ. Visit InfoQ

Meta recently open-sourced Code Llama, a code generation LLM which is based on the Llama 2 foundation model and carries the same community license. Code Llama was fine-tuned on 500B tokens of code and is available in three model sizes ranging up to 34B parameters. In evaluations on code-generation benchmarks, the model outperformed all other open-source models and is comparable to ChatGPT.

Meta used three sizes of the Llama 2 foundation model—7B, 13B, and 34B parameters—as starting points for Code Llama. These were fine-tuned on a “near-deduplicated” dataset of code as well as natural language related to code, such as questions and discussions. Meta also trained two variants of each model size, besides the base version: Code Llama – Python, which is further fine-tuned on Python code; and Code Llama – Instruct, which is fine-tuned on natural-language instructions. All nine model versions are licensed for commercial use. According to Meta, 

Code Llama is designed to support software engineers in all sectors – including research, industry, open source projects, NGOs, and businesses. But there are still many more use cases to support than what our base and instruct models can serve…We hope that Code Llama will inspire others to leverage Llama 2 to create new innovative tools for research and commercial products.

InfoQ previously covered other code-generation AI models, including OpenAI’s Codex, which is based on GPT-3 and powers GitHub’s Copilot. Like the other models in the GPT series, Codex is only available via OpenAI’s web service API. This has prompted the development of open models, such as BigCode’s StarCoder. StarCoder also has the advantage of being trained on “permissively-licensed” code, so that the use of its output is unlikely to result in license violations. While Llama 2 and its derived models, including Code Llama, are licensed for commercial use, the Code Llama license notes that its output “may be subject to third party licenses.”

In addition to fine-tuning the models on code, Meta also performed long context fine-tuning (LCFT), which increases the length of input the model can handle. While Llama 2 was trained on sequences up to 4k tokens, the LCFT for Code Llama includes sequences up to 16k. Meta’s goal for this was “unlocking repository-level reasoning for completion or synthesis,” giving the model access to an entire project’s code instead of only a single function or source file. Meta’s experiments show that the model exhibits “stable behavior” for sequences up to 100k tokens.

In a Twitter/X thread about the model, Furkan Gözükara, an assistant professor at Toros University, noted that GPT-4 still outperformed Code Llama on the HumanEval benchmark. Another user replied that GPT-4 was “not 34B,” meaning that GPT-4 is a far bigger model. The makers of phind, an AI assistant for programmers, released a fine-tuned version of the 34B-parameter version of Code Llama – Python that they claim achieved a 69.5% pass@1 score on HumanEval, which outperforms GPT-4’s published score of 67%. One of the developers joined a Hacker News discussion about their release, and said:

This model is only the beginning — it’s an early experiment and we’ll have improvements next week.

The Code Llama source code is available on GitHub. The model files can be downloaded after applying for approval from Meta.



Intra-Account Collection Copy in Azure Cosmos DB for MongoDB in Public Preview

MMS Founder
MMS Steef-Jan Wiggers

Article originally posted on InfoQ. Visit InfoQ

Microsoft recently introduced the public preview of Intra-Account Collection Copy for Azure Cosmos DB for MongoDB, allowing users to copy collections within the same account – offering enhanced data management and migration capabilities.

Azure Cosmos DB is a fully managed NoSQL database with various APIs targeting NoSQL workloads, including a native NoSQL API and compatibility APIs such as those for MongoDB and Apache Cassandra. In addition, the service supports relational workloads through PostgreSQL.

Last year the company first introduced the preview for Intra-account container copy jobs to allow users to create offline copies of containers for Azure Cosmos DB for both Core (SQL) API and Cassandra API using Azure CLI. Azure Cosmos DB now also has an Intra-Account Collection Copy feature for MongoDB that, according to the company, “enables users to copy collections within the same Azure Cosmos DB account in an offline manner.”

An account in Cosmos DB contains all the Azure Cosmos DB resources: databases, containers, and items. When creating an account, users can select the MongoDB API. Subsequently, they can add a database and collection (container). Within the account, users can create multiple collections.

The ability to copy collections within a MongoDB account can be helpful for data migrations, for instance when the data has evolved and queries are no longer efficient with the existing shard key. Users can choose another shard key on a new collection and migrate the data using a collection copy. Another use case is updating the unique key index of a container by defining a new unique key index policy and migrating data to the new collection using a collection copy.

To migrate a collection or database, users can register for the preview feature and install the Azure Cosmos DB preview extension through the CLI. Next, they choose the source and destination collections between which they want to copy the data and start the collection copy operation from the Azure CLI. Lastly, they can monitor the progress of the job.

A job to copy a collection within an Azure Cosmos DB for MongoDB account can be created as follows:

# PowerShell: the backtick continues the command on the next line. The
# $resourceGroup, $accountName, $jobName, and the source/destination database
# and collection variables are assumed to have been defined beforehand.
az cosmosdb dts copy `
    --resource-group $resourceGroup `
    --account-name $accountName `
    --job-name $jobName `
    --source-mongo database=$sourceDatabase collection=$sourceCollection `
    --dest-mongo database=$destinationDatabase collection=$destinationCollection

Several other cloud database services support MongoDB. One is MongoDB Atlas, MongoDB’s own fully managed cloud database service. This database service also supports data migration from one database to another, comparable to Cosmos DB for MongoDB’s latest feature, Intra-Account Collection Copy. With MongoDB Atlas, users can bring data from existing MongoDB deployments, JSON, or CSV files into deployments in Atlas using either live migration, where Atlas assists them, or tools for a self-guided migration of data from their existing deployments into Atlas.

Lastly, the documentation provides a list of the Azure regions supporting the feature.



Presentation: Taming Configuration Complexity Made Fun with CUE

MMS Founder
MMS Marcel van Lohuizen

Article originally posted on InfoQ. Visit InfoQ

Transcript

Van Lohuizen: A little over 20 years ago, fresh out of university, I moved to the Bay Area to work at a startup where I continued working on natural language processing technology, the subject of my PhD. These were turbulent times, the dot-com crash was in full swing, which meant many closed establishments, empty office buildings, and parking lots. Traffic was even easy. The startup I joined would soon be a victim of this crash as well. These were also exciting times, Mac OS X 10.0 was just released, which was exciting for me as a NeXTSTEP fan. Also, the first Apple Stores were open. What was also exciting was the technology I worked on at the startup. As I later realized, we were essentially by today’s standards, in configuration heaven. All the words, grammars, ontologies that were maintained by this company were specified declaratively in this beautifully tailor-made system. It was worked on by multiple teams spread across universities and companies. Teams consisted of engineers and non-engineers, linguists. This was all at the scale that rivals the largest configurations I’ve seen at Google, my then future and now former employer. Later, I even realized that the properties of these configurations are actually strikingly similar to today’s cloud configuration. There didn’t seem to be any problem with this approach at all. It was fast, scalable, and manageable. It was even a joy to work with, and sometimes even with bouts of ecstasy. I don’t know about you, but I don’t know many people that speak of configuration that way nowadays.

2002 – 2005 (Google)

After the startup fell prey to the dot-com crash, I started at Google. I made some resource hungry changes to the search engine for my starter project there. I needed bigger and more machines for testing. My best bet was to use a pool of machines for batch processing that were sitting idle, owned by my team. These machines were very different from the production machines, and I needed to configure everything from scratch. I even needed to adapt the code for these servers to be able to run on these machines at all. Also, I wasn’t a big fan of the existing production configuration setup, nobody was really. I thought this was a good opportunity to introduce the configuration techniques I learned from my previous company. What I ended up with was essentially the first Kubernetes-like system I’m aware of. The system became rapidly popular within Google, and even was, to my horror, used to launch some beta, but still production services. This was not production ready. As a result of all this, though, I became part of the Borg team, which later inspired Kubernetes, with a goal to build something very similar but production ready. I was enthusiastic about configuration and wanted to replicate that experience I had with my previous company. Then, also, based on the past experience with Google, I was told that the system would have to not allow complex code, and only allow at most one layer of overrides. I thought, that shouldn’t be too hard. The system I used to work with had no overrides at all and no code, so it should be easy. I ended up creating GCL, which is something like JSON. Initially, it had some of the properties, but really, very soon, it didn’t. Needless to say, we did not end up in this configuration Nirvana at all. Clearly, we took some wrong turns along the way.

Background

At the end of 2005, I moved to work for Google, Switzerland. Here I did all sorts of things, including management, a 10-year stint on the Go team, and doing research as part of the SRE group into configuration related outages. During that time, I always kept an eye on configuration. It always bugged me that I wasn’t able to replicate that great experience in my previous line of work. In my mind, I was already pretty convinced where I made mistakes. Over the years, I saw validation that my suspicions were indeed correct. At some point, I started working on CUE to incorporate my lessons learned as well as those from others in the field. Since last year, I left Google and started working on CUE full time. My name is Marcel van Lohuizen.

The Need for Validation

Overall, I think there’s a lot of improvement to be gained in what we call configuration engineering. This is what we aim for with CUE. Why is it important at all? Who cares, one might ask? You don’t like how writing configuration is today, and you don’t like it as much as you used to. Life is hard, deal with it. Research from CloudTruth, and others as well, show that for many companies, over 50% of outages are configuration related. We see that configuration at scale is often cracking in its seams. Where did we go wrong? There are actually many lessons to be learned. I want to go in-depth on a few of those. One of those is the lack of validation, or testing of configuration. Here’s a quote from my former colleague Jaana, stressing the importance as well.

Let’s dive into an example to show the need for validation. We have a service that we want to run in both staging and production. We have two self-contained YAML files that define how to run it on Kubernetes. They’re defined as a deployment, which is a Kubernetes concept of how to run on a server. In this example, we only want to vary the number of replicas between the two environments. Everything else, which can actually be quite a bit, is not shown here; it’s identical between these two files. Updating common values between these two files can get old very quickly. As we know from both software engineering and database design, redundancy, aside from being tedious, is also error prone. How can we fix this? A common pattern is to have a base file which contains all common fields, and then to have the two specific environments derived from that. There are various ways to accomplish this. Here we see a bit more detail, but still a greatly simplified deployment. As a reminder, we only care about replicas here. One approach is to templatize the base template using variables. This can work quite well. There are some limitations in scalability, and the variable pattern may suffer from the typical issues associated with parameterization. There are some good publications on that topic, but it certainly can work for moderate setups.

For our example, though, we want to focus on another common approach: overrides. Override approaches are used by kustomize, GCL, and Jsonnet, among others. For this example, we’ll use kustomize. Kustomize gives you a way to customize base configurations into environment-specific versions using only YAML files. There are two kinds of YAML files: ones that follow the structure of the actual configuration, and metafiles that explain how to combine these files together. This is a typical kustomize directory layout. The base directory contains all the configurations on which our environment-specific kustomizations are based. The kustomization YAML file defines which files are part of that configuration. Then there is the overlays directory, which contains a subdirectory for each environment-specific tailoring.

Let’s see what that looks like. At the top, you see the content files for base and production, while at the bottom you see the corresponding metafiles describing how these files are combined. Notice that the metafiles only describe how to combine files, not the specific objects. Kustomize is specifically designed for Kubernetes and uses knowledge of the meaning of fields to know how to combine objects. This is why you see that some key information about the deployment still needs to be repeated in the patch file. The result of applying the patch is the base file where replicas is modified from 1 to 6. This is really inheritance. File overlays are just another form of override inheritance. In that sense, it’s no different from the inheritance that is used in languages like Jsonnet and GCL. A problem with inheritance is that it may be very hard to figure out where values are coming from. As long as you keep it limited to one level, things are usually ok.

So far, so good. Now let’s look at another example. Let’s assume that the SRE team introduced a requirement that all production services have Prometheus scraping enabled. This could be a requirement for health checks as well, for example, which might make more sense. This is a very simple example. For the sake of simplicity, let’s stick with this. Someone in a team enforced this requirement in the base deployment so that it will be enabled in every environment automatically. You could also imagine that there was another base layer or another layer that provides the default for all deployments within a group or setup, not just for frontends. Now later, somebody else in a team explicitly turns Prometheus scraping off in the prod file. As before, this can be done by overriding the value like this. As we said before, this was a requirement. Clearly, this should not be allowed. Under what circumstances could you expect this to happen? Somebody could have added this to debug something and forgot to take it out. We all have fiddled with configurations or code for that matter to try to get things working, so this is not too unthinkable. Another reason could be that a user was simply relying on a tooling to catch errors. As you see, there’s no formal way to distinguish between a default value or a requirement. This can easily be overlooked. Another more sneaky way how this could happen is that somebody already put this Prometheus scrape false in before it became a requirement. This could happen, for example, when somebody turned it on and then turned it off, or maybe there was a default just before without it being a requirement. Then, it was made a requirement later down the line. You can imagine how this failure will go unnoticed then. Really, what’s missing here is a formal way of enforcing the soundness of a configuration.

What Else May We Want to Enforce?

We have shown a simple example, but the types of things we may want to enforce are really myriad. For instance, one could require that images must be from a specific internal container registry. We could set a maximum number of replicas so that resource usage will not get out of hand. Containers might be required to implement a health check, or we might want to enforce particular use of labels, or implement some API limitations, check quota limits, or authorization. One could, of course, move all these enforcements into the system that consumes the configuration, so wherever we pass the configuration, and reject it if it’s incorrect. That’s certainly more secure, and we should do that anyway. There’s no way for users to override it in that case. Even if we do that, we still want to know about the failures ahead of deployment. We still want to know about these things early. Failing fast and early is always a good idea. We’ve learned the hard way that testing or validation is as important for data and configuration as it is for code. In the majority of cases where I’ve seen rigorous testing of data and configuration introduced, a whole host of errors was uncovered. What is testing in this context? You could think of unit tests, writing actual test code for your configuration. Another approach is assertions. GCL and Jsonnet do this, for instance. Really, any contract that checks invariants would do. Any method that accomplishes that would work. One problem with both unit tests and assertions is that we’ll end up with lots of duplication. In the example above, it would mean repeating the scraping requirement in a separate test or assertion.

CUE Crash Course

This is where CUE comes in. Let’s take a look at how CUE would solve the problem. CUE provides a data, schema, validation, and templating language that provides an alternative to override-based approaches. In the approach that CUE was based on, it was recognized already 30 years ago that there is a great overlap between types, templating, and validation. In a moment, I will return to how to use that to solve the above issue. Before I give CUE solutions to this problem, let me do a quick five-slide crash course on CUE. The CUE language is all of these at once: a data language like JSON, a schema validation language like JSON Schema, and a templating language like HCL. Let’s see what that looks like. As a data language, CUE is strictly an extension of JSON. Think of JSON with syntactic sugar. You can drop quotes around field names, in most cases. You can drop the outer curly braces and trailing commas. There are more human readable forms of numbers. There may appear to be some similarities to YAML. A big distinction with YAML though is that CUE is not sensitive to indentation, making copy and pasting of CUE a lot easier and, especially, a lot less error prone. CUE also allows defining schemas. Here we see a schema defined in Go, and its equivalent in CUE. In CUE, schemas look very much like data, where the usual string and number literals are replaced by type names. This already hints at an important concept in CUE: types are treated just like values. CUE also allows validation expressions, where validations, like types, are just values. Let’s take a look at this JSON Schema. Field ID is just defined as a string. Field arch is an Enum which can only be one of the two shown values. For RAM, we define that a machine in this data center must have at least 16 gigabytes of RAM. Similarly, we also require that all disks must be at least of size 1 terabyte. You can already see one benefit of treating types and validation as values, namely that the resulting notation is quite compact.
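
To make this concrete, here is a rough sketch in CUE of the three roles just described; the field names and exact bounds are assumptions for illustration rather than the slides’ exact content.

// Data: a strict superset of JSON — quotes around field names and the
// outer curly braces can be dropped.
id:   "machine-1"
arch: "x86_64"

// Schema and validation in the same notation: types and constraints are
// just values.
#Machine: {
    id:   string
    arch: "x86_64" | "aarch64"  // an enum of allowed values
    ram:  >=16Gi                // at least 16 GiB of RAM
    disks: [...{size: >=1Ti}]   // every disk at least 1 TiB
}

Checking concrete data against such a schema is then done with the cue vet command.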

CUE can also be used for templating. Take a look at this HCL, for example. Just like HCL, CUE has references and expressions, key elements of templating. CUE doesn’t have the notion of variables per se, but any field can be made a variable by adding the tag annotation. This is really a tooling feature, not a language feature. CUE has a very rich set of tooling built around it to make use of these kinds of things. If you want to put constraints on such variables, you just use the validation contracts we saw before on the value itself. More specifically for templating, CUE also supports default. Anytime you have an Enum in CUE, you can mark a default with an asterisk. Defaults are really a kind of override mechanism. It’s the only override mechanism that CUE allows. It’s CUE’s answer to boilerplate removal. It’s a very powerful construct, actually, ensuring that both the depth of inheritance stays limited while making boilerplate removal quite granular and effective. Comments are first-class citizens in the CUE API. It is common to note special case comments if you otherwise don’t need to. This is used in API generation, one of CUE’s capabilities. In summary, CUE uses the same structure for data, schema, validation, and templating. There’s a lot of overlap between these and combining them in one framework is a very powerful notion. Another important thing to know about CUE is that it can compose arbitrary pieces of configuration about the same object in an order independent way, meaning that the result is always the same, no matter in which order you combine it. This in and of itself is a key factor on how CUE gets a grip on configuration complexity. Override approaches do not have this property, even when just using file-based patches.
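
A minimal sketch of the templating features just mentioned (the field names are made up for illustration):

// Any field can act as a variable through a tag, a tooling feature:
//   cue export -t env=prod
env: string @tag(env)

// Constraints on such a "variable" are just ordinary validation:
env: "dev" | "staging" | "prod"

// Defaults are marked with *, the only override-like mechanism in CUE:
replicas: *1 | int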

Demo of Redoing the Example in CUE

Now back to our example. Here we see a possible CUE layout for the same setup as above. At the top we see a CUE mod directory. Much like the Go language, CUE can treat an entire directory structure in a holistic way with predictable and default build behavior. This makes it easy for configurations to be treated hermetically within a context. This is what a module file would look like. It’s really just to declare a unique identifier for the module, not unlike Go. The CUE mod directory more or less serves the same purpose as the kustomization YAML files in kustomize, and defines how to combine files. If the module file is all it takes to specify how to combine files, how does CUE know how to combine all the objects? If it’s not based on the file name, and as CUE is not Kubernetes specific, it’s also not based on the field, so how do we do that? One part of the answer is to rely on the directory structure itself. All files in a directory and parent directories within a module that are declared within the same package are automatically merged. All CUE files in prod are automatically merged with all files in base for instance, as long as they have the same package name declared. It doesn’t quite answer everything. In the kustomize setup, different files describe different objects at the same level. It does so by matching object types and names based on Kubernetes specific fields. CUE is not Kubernetes aware, so how does it know how to combine them? We said that CUE can combine configuration aspects about the same object in any order. All we need to know, really, is which configuration aspects belong to which object. We do this by assigning a unique address or path for each distinct object within a namespace. Think of it like a RESTful path. Here we see an example of such a possible path structure and a specific instance from our frontend deployment. A big advantage of declaring all objects in a single namespace is that we can define validation as well as boilerplate removal that spans multiple objects, such as automatically deriving a Kubernetes service from a deployment, for example.
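
For reference, the module file mentioned above can be as small as a single line; the module path below is made up for illustration.

// cue.mod/module.cue (sketch)
module: "example.com/services"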

What does this all look like? Let’s start with our production tailoring of the frontend deployment. Here, the first line represents the package name I mentioned earlier, which is used to know how to combine files. This will be included in all the files that we’ll show. The second line is the adjustment which just sets the number of replicas to 6. Let’s compare it to our original kustomization example. You can see it roughly contains the same information. One could notice, though, that many of the fields have been omitted. This is possible because this information is now included in the path, and the path uniquely describes that object. The identifying fields are therefore no longer needed as they’re already specified in the base template. Let’s now take a look at the base template. The base applies both to prod and dev, which is reflected in the path. We really mean any environment here, though, so usually we write this using the any symbol. All the fields are mixed in automatically with prod that we saw earlier, causing the frontend deployment to be completely defined here. There’s an interesting difference to note compared to the original kustomize template. You can see that it’s quite similar in structure. One noticeable difference, though, is that we no longer set replicas to 1. This is because CUE doesn’t allow overrides, so setting it to 1 would conflict with the value of 6 in the prod file and just cause an error. We do set it to int though, to indicate that we at least expect a value. There’s really no need to set a default value here as all concrete instances already specify a replica explicitly. Also, it’s often good not to have a default value specified to force users to think about what value is really appropriate. However, if one really wishes to set a default, we can do so using the asterisk approach and Enums as we saw before, as you can see here.
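
A sketch of roughly what the two files just described might contain; the file and package names are assumptions, and everything but the fields discussed here is elided.

// prod/frontend.cue (sketch): the production tailoring
package kube

deployment: frontend: prod: spec: replicas: 6

// frontend.cue in the parent directory (sketch): the base, for any environment
package kube

deployment: frontend: [string]: {
    apiVersion: "apps/v1"
    kind:       "Deployment"
    spec: replicas: int // expect a value, but deliberately no default
    // ... all other shared fields go here
}

If a default were wanted after all, the replicas line could instead read replicas: *1 | int, using the asterisk notation from the crash course.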

Setting Our Scraping Requirement

We have replicated our original kustomize setup. How do we now introduce our scraping requirement? To show the flexibility of CUE, we define this rule in a separate file in the top directory, named monitoring. It follows the same approach as before, but we specify a path to which the configuration aspect belongs, along with the desired tailoring. Because true is not the default value here, it just becomes a requirement imposed on the frontend job in any environment. If a user wanted to set this to false in any of the environments, they would first have to modify this file. Note that this is not unlike how this would work if you had unit tests. Also, there, you would have to change the test first to make it work. The key difference here, though, is that this rule functions both as the template and as the requirement, so you don’t need to write a unit test anymore. There’s no duplication, but all the convenience and safety are there. Now, if you wanted to be a bit more lenient, and say, only require this setting for prod but not for other environments, we could write this as shown here. Here for any environment, the scraping value is defined as either the string true, which is the default, or the string false. This has the additional benefit that this validates that the value is actually either the string true or false, and that anything else is an error. For example, Boolean true or false. One could easily imagine a user would inadvertently write this as a Boolean instead of a string. This is another good example of how validation and templating overlap. Also note that the second rule also applies to prod. That’s fine. The only value that is allowed by both is true, and the second rule simply has no effect for prod. In general, a nice property of CUE is that you can determine that Prometheus scraping must be true by just looking at the first rule. No other rule can ever change this, so you don’t even have to look at the production deployment file to check for this, because no other rule could change it. Based on experience, this is actually an immensely helpful property to make configurations more readable and reliable. We could also easily generalize this rule beyond our frontend job. All we need to do again is to replace the frontend field with the any operator we saw before.
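
Sketches of the two monitoring rules described above; the annotation path follows the common Kubernetes prometheus.io/scrape convention, and the exact slide content is assumed.

// monitoring.cue (sketch): a hard requirement for every environment
package kube

deployment: frontend: [string]: metadata: annotations: "prometheus.io/scrape": "true"

// The more lenient variant: required in prod, a default elsewhere
deployment: frontend: prod:     metadata: annotations: "prometheus.io/scrape": "true"
deployment: frontend: [string]: metadata: annotations: "prometheus.io/scrape": *"true" | "false"

Generalizing the rule beyond the frontend job is then just a matter of replacing frontend with another [string] pattern.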

What Is CUE?

What is CUE? CUE is not just a language, but also has a rich set of tooling and APIs to enable a configuration ecosystem. It’s really not specific to any application, but rather aims to be a configuration Swiss Army Knife, allowing conversions to and from and composition of different configuration formats, refactoring configuration, and integrating configuration into workflows, pipelines, and scripting. It’s designed really with next-gen GitOps in mind. CUE itself is generic and not specific to Kubernetes. There are projects, tools, and products in the CUE ecosystem like KubeVela, a CNCF open source project that builds on top of CUE and adds domain specific logic. CUE itself is application agnostic. That said, CUE itself has some tools to make it more aware of a certain context, like Kubernetes, in this case, for instance. Let’s take a look at how that might work. All we need to do to make it more aware of Kubernetes really is to import schemas for Kubernetes defined elsewhere, and assign it to the appropriate path. For instance, here we say that all paths that correspond to a deployment are of a specific deployment type. From that moment on, all Kubernetes will be typed as expected. Now if you type a number of replicas as a string, for example, or even a fractional number, instead of an integer, CUE will report an error. Where does the schema come from? You don’t need to write it by hand, in most cases. It may almost seem a little bit like magic, but you can get it from running the shown command. How does this work? The source of truth for Kubernetes schema is Go. CUE knows how to extract schema from Go code. That’s really all there is to it. In the example we showed, we had a single configuration that spanned an entire directory tree. Modules and packages could also be used to break the configuration up in different parts linked by imports. This gives really a lot of flexibility on how you want to organize things with CUE.
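
A sketch of what that might look like; the import and the path layout follow the demo above, and the schemas would be extracted from the Kubernetes Go types with something like cue get go k8s.io/api/apps/v1.

package kube

import appsv1 "k8s.io/api/apps/v1"

// Every object under deployment/<name>/<environment> must now be a valid
// Kubernetes Deployment; mistyped fields are reported as errors.
deployment: [string]: [string]: appsv1.#Deployment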

What Really Causes Outages Nowadays?

We’ve seen some of the benefits of validating configuration. It’s really not all there is to preventing outages, though. Really far from it. Earlier, we mentioned that research shows that for many companies, more than 50% of the outages are related to configuration. This concurs with my experience. Much of this is really caused by simple validation failures in a single system, like the one we showed before. Indeed, I’ve seen many outages actually related to such a failure. The more mature a company becomes, the less likely that will be the case. On the other hand, the more a company matures, configuration also tends to grow in complexity. As a result, this 50% figure seems to hold up over time, even as the simple cases get nearly eliminated completely. A good illustration of this is this outage reported by Google. I would indeed classify this as a configuration-related outage, just not of the simple kind that we addressed before. There are a handful of very common patterns that one can observe from configuration-related outages. One of them is if an application defines a configuration that’s valid in principle, but violates some more specific rules or policy of a system to which its configuration is communicated upstream. If these specific rules are not known and tested against at the time of deployment, a launch of such a system can fail in production, and often unnecessarily so. This is a case of not failing early due to a lack of sharing. You want to fail early, as we mentioned earlier.

This is one of the things that went wrong in that outage. Correlating configuration and validation rules or policy, and using them pre-deployment, can greatly help in these cases. How does one do that? What is configuration even, really? To answer this, let’s see where configuration lives in this very simple service. Here we have a single Pong Service that listens to ping requests and replies with pong if the request is authorized. It also logs requests to a Spanner database. The low-level infrastructure is set up by Terraform, in this case, and authorization requests are checked by OPA. Can you spot the configuration? Let’s see. The most obvious one, perhaps, since this track focuses on infrastructure as code, is the Terraform configuration. In this case, it’s used to deploy the VM and the Spanner database. Also, our server operates based on settings. Here we show a JSON configuration file, but really command line flags and environment variables are all configuration artifacts. Thirdly, we have a schema definition of the database. Why do we call this configuration? Really, a database table definition is also configuration. You can see here that the database tables really can be a combination of schema and constraints or validation. In other words, the database schema defines a contract of how the database can be used. This is really configuration in our view. As a litmus test, you can see the translation of the schema into CUE on the right-hand side. You see that it’s mostly a schema, but has some validation rules associated with it as well.
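
As an illustration, a CUE rendering of such a table contract might look like the sketch below; the column names and bounds are invented, not taken from the talk.

#Audit: {
    id:        string
    timestamp: string             // e.g. an RFC 3339 timestamp
    method:    "GET" | "POST"     // a constraint, not just a type
    status:    int & >=100 & <600 // validation embedded in the schema
    message?:  string             // an optional column
}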

Let’s continue our search for configuration. We already mentioned that the Pong Server needs to be configured. Also, data types within the code that are related to communication with other components can be seen as configuration. Let’s look at one of these types here, audits, for example. You can see there’s redundancy with the database schema defined earlier. It’s basically the same schema, but in Go, it drops many of the constraints that were defined in the database schema before. It only partially encodes the contract with the database. None of these constraints are really included in the Go code, so this can result in runtime errors that could have been prevented pre-deployment. This is a nice example of that. It’s like using a dynamically typed language without unit tests.

You will only discover such errors when things are running. Let’s continue our search. Our Pong Service is friendly enough to publish an OpenAPI spec of its interface. Really, this is also configuration. It does overlap with other parts of the system, for instance, regarding what types of requests are allowed. We’re not done yet. We haven’t touched our authorization server yet. Aside from the configuration that is needed to launch that server, the policies that it executes and checks are configuration as well.

Let’s take a look. Here we have a very simple Rego policy that specifies only Get methods are allowed. This is not a restriction of the system per se, but rather an additional restriction enforced by this policy. As this is a static policy, there’s really no reason not to include this restriction in the OpenAPI published by the Pong Server. Indeed, it does. The problem is, though, that in the current setup, it is maintained manually. This is error prone. On the right-hand side, you see a possible equivalent of the Rego on the left-hand side in CUE. We’ve taken a bit of a different approach here with CUE. We could have used a Boolean check, but we don’t. We’re making use of the fact that CUE is a constraint-based language here. Rather than defining allow as a Boolean, we compose it with the input, where a successful composition means allow, and a failure means deny. Here, it doesn’t make much of a difference. For larger policies, specifying the policy in terms of constraint this way tends to be quite compact and readable when done in CUE. We see an important role for CUE in policy for this reason.
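
A minimal sketch of that constraint-style policy, with an assumed request shape:

// Hypothetical shape of the incoming request.
input: {
    method: string
    path:   string
}

// The policy is a constraint rather than a boolean: unifying it with the
// input succeeds for GET requests and fails with a conflict otherwise.
allow: input & {method: "GET"}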

The CUE Project

As we have hopefully shown, configuration is everywhere. Most of you will even carry some in your pocket, like your settings on your smartphone are configuration. We can see a lot of overlap and redundancy in the configuration of the Pong Server. This is a small server, but things really don’t get any better for larger services. Can we address that with CUE? The CUE approach is to consolidate all configuration, removing all redundancy and generating everything necessary from a single source of truth. Using CUE like this ensures that all contracts are known throughout the system as much as possible. Eliminating redundancy also increases consistency. All this helps to fail early and prevent outages. This tweet from Kelsey captures nicely what is going on. We need clear contracts between components and we need visibility of contracts, configuration, and state even throughout pipelines. What we’ve also shown is that this is not exclusive to infrastructure, this goes beyond infrastructure.

Here’s another quote from the Google Cloud website. It specifically emphasizes that contracts are often lost in code; dealing with configuration really calls for a declarative approach. This is exactly what CUE is about. We talked a bit about what CUE is, but let me share a bit where the CUE project is at. A key part of CUE is pinning down the precise meaning of configuration. This allows it to define adapters for accurate representations of a number of formats. The ability to morph any configuration into different formats really makes CUE great for GitOps-style deployment. The set of adapters is certainly not complete, as you can see here, but things are moving pretty fast. A lot can already be done with what exists. For example, CUE’s own CI runs on GitHub Actions. The way we do that is we define our workflows in CUE, making use of templating and other CUE features. Then we import publicly available JSON Schema definitions for GitHub workflows to validate these workflows. We then export YAML and feed it to GitHub. We also have a lot of users already. Here’s a small and by no means complete selection of companies, projects, and products. Some of these are using CUE as a basis for systems of tools they are building, not unlike what we saw in the demo. Some of these are actually exposing CUE to their users as a frontend, or are leveraging the composable nature of CUE as well as the rich CUE toolset that is available. We’re just getting started. Aside from me, we have Paul, Roger, Daniel, and Aram, who all have strong backgrounds in the Go community, working on CUE development. Carmen, also ex-Google and ex-Go team, oversees a redo of our documentation and learning experience, as well as user research, among other things. Dominik is responsible for project ops and also user research.

Conclusion

To reach five nines reliability, we need to get a handle on configuration. We believe this is done by taking a holistic approach to configuration. This is not an easy task, by any means, but this is the goal we’ve set out to achieve. Configuration has become the number one complexity problem to solve in infrastructure. We need a holistic approach that goes beyond just the configuration languages used. We need tooling, APIs, and adapters that enable an ecosystem of composable and interoperable solutions. We believe that CUE will be able to support such a rich configuration ecosystem, and that this will reduce outages and increase developer productivity, while making it delightful.

Questions and Answers

Andoh: Why don’t you just use a general-purpose programming language or configuration?

Van Lohuizen: The general structure of configuration, especially as it gets a bit larger, which actually happens quite quickly, is that you have a lot of regularity in all the variations of configuration, but there are a lot of irregular exceptions within it. As long as you don’t have that, as long as you have a lot of regularity, then it’s quite easy to write a few for loops and generate all the variations of this configuration. If you don’t have that, then expressing that in code is very verbose and very hard to maintain. Whereas with a more declarative, logic programming-based approach, it actually becomes much more manageable in that case. That’s really the main reason. You could say that up to medium scale, programming languages work fine, but it’s really for the larger configurations that it starts to break down, generally speaking.

Andoh: How does CUE enable better testing and validation?

Van Lohuizen: Of course, CUE is a constraint-based language, so it’s fairly easy to define assertions and restrictions on your data in CUE. Really, a key part of what makes it so powerful, though, is that the same mechanism you use for testing and validation, you can also use for templating and generation. For instance, suppose you have a field that says the number of replicas should always be 10. You can use it as a templating feature. If you don’t specify it, replicas: 10 is automatically inserted in your configuration. You can also see that that’s validation. If the user specifies 10, it’s fine, but if the user specifies 9, these two things clash (see the small sketch after this answer). Validation and templating are really two sides of the same coin. This makes things a lot easier. Yext is a company that has done this, for example. Just as configuration, as I mentioned, is about creating a lot of small variations of setups, the same is often true of test sets. What we’ve seen people do is actually use CUE itself to generate test sets that cover a whole variety of cases. CUE as a language not only makes it easier to test, it also makes it easier, just as with configuration, to generate test data.
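
A tiny sketch of that replicas example, assuming two CUE files in the same package:

// policy.cue (sketch)
deployment: replicas: 10

// user.cue (sketch): unifies cleanly; writing 9 here instead would make
// cue vet or cue eval fail with a conflicting-values error.
deployment: replicas: 10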

I also think using programming language for config is also a big temptation to start embedding complex logic into config files. I think that’s not a good pattern, and programming languages may give you too much rope to hang yourself.

That’s absolutely true. This was one of the design points for GCL in the early days, that basically a configuration language should really not do computation. If you really need any computation at the configuration layer, you should shove it to the server. Especially back in the days at Google, we could do that because we had control over the entire internal ecosystem. Even there in practice, that would actually fail, because even though it’s one company controlling all of that, there are still different teams. If one team wants to configure it in some different way, the other team that’s controlling the binary might not just want to add that logic into their binary. Plus, there’s different release schedules, and for all kinds of practical reasons that’s not the case. It’s inevitable that you will have some computation at the configuration layer. What you see often happening is that these DSLs then involved in ultimately, basically, general-purpose programming language, which, of course, are very hard to use, and it becomes a complete mess. The way we try to fix that in CUE is the composable features and nature of CUE also allows you to combine externally computed data into CUE. CUE has the scripting layer where you can basically alternate between CUE evaluation and shelled out computation. What that allows you to do is to basically take configuration in CUE, get some values, shell it out to some other computation, like some binary or something else. We’re working on a Wasm extension as well. Then take that data and insert it again in the declarative configuration. That does make it a little bit impure. At least what it allows you to do is to truly separate the computation that needs to be in the configuration layer, use a general-purpose language for that, unit test it. Then have anything that can be modified quickly and easily to what’s easy to read and can be expressed in data, you can keep in CUE itself. You can compare it a little bit to spreadsheets. I often say that CUE is a spreadsheet for data, so mostly you are just specifying numbers, you can specify some validation rules. If you really need to do some computation, you use these functions in Google Sheets, or Excel, or whatever that you can program in Visual Basic, or JavaScript, or what have you. Really, you keep the code separate from the configuration. I think that’s a good compromise for these cases where the computation really needs to live in the configuration layer.

Andoh: Another thing that you said, when I asked about why not just use a general-purpose programming language, was that after a certain scale you want a configuration language. Does that mean that CUE is really only best for large-scale configurations? What about small ones?

Van Lohuizen: Even though it’s designed for very large scale, and the experience behind it is with extremely large configurations, we still need to do some performance work to make that hold in the general case. We also recognize that, generally speaking, configurations start very small, and it should always be the goal to keep configuration small. We wanted something very simple that already works from the very beginning. Think of it a little bit like Go, for example: it’s a language you can use for very simple programs and quick development, but it actually scales fairly well for larger systems. One of the design principles was to make CUE data-driven and make it look like JSON, so that it looks as familiar as possible. If you know how to write JSON, you can already start using CUE, and then you can start using the syntactic sugar and grow into the language. That was a big part of it. Also, even though CUE works at very large scale, you can often see these problems coming with fairly small configurations already. If we’re talking about hundreds of lines, you can already see them occurring; with tens of thousands or hundreds of thousands of lines, you’re almost guaranteed to run into them. So it can happen at smaller scales, too.
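The “looks like JSON” point is easy to show with a small sketch (the data is invented for the example): any valid JSON document is already valid CUE, and the same data can then be rewritten with progressively more of CUE’s shorthand.

    // Version 1: plain JSON, accepted by CUE as-is.
    {
        "service": {
            "name": "checkout",
            "port": 8080
        }
    }

    // Version 2: the same data using CUE's sugar -- unquoted field
    // names, no commas, and nested fields collapsed onto one line.
    service: name: "checkout"
    service: port: 8080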

Andoh: In one of your slides, we saw four different tools being used to do all the work that CUE does, and I think the logos were JSON Schema, OpenAPI, YAML, and JSON. Then you talked about how CUE can also do schemas, validation, and things like that. Since this is about going beyond infrastructure, can you talk about what CUE can do beyond infrastructure?

Van Lohuizen: We’ve had some really unexpected use cases for CUE. People come to us and ask, first of all, do you know you’re solving the composable workflow problem? You see that many uses of CUE are going in that direction, where full CI/CD pipelines and things like that are being defined in CUE. The same goes for artificial intelligence and ML pipelines: how do you compose the results, how do you set it up? It’s a very similar problem if you think about it. There’s also lower-level data validation, and we have companies managing their documentation in CUE, which, if you think about it, is also a configuration problem. We’re seeing it branch out at all these different levels; that’s the more horizontal direction, to some extent. Then, if you look at the layering of configuration, it’s not just data and types but also policy. It’s quite hard to specify policies well; I think it can only really be done with logic programming, with formulas. That’s why you see the success of Rego and those sorts of tools, which are all based on that principle. CUE was actually designed as a reaction to Prolog- and Datalog-like approaches, which are not very easy for a lot of people to understand; the biggest users of CUE’s predecessors were originally not software engineers. We think CUE is quite a good tool there: if you have to use logic programming, it is quite an approachable way to move into the policy realm and all these things. That’s why we’ve seen so much demand. It’s really nice to have one tool and one way of specifying all these different things, and we think that is quite useful.
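As a small illustration of using CUE outside of infrastructure config (a sketch with invented field names, using plain CUE constraints rather than anything specific mentioned in the talk), a definition can act as a schema-plus-policy that data is checked against:

    #Document: {
        title:    string
        owner:    string & =~"@example\\.com$"   // policy: owner must be an internal address
        reviewed: bool | *false                  // defaults to false if omitted
        pages:    int & >0
    }

    doc: #Document & {
        title: "Release notes"
        owner: "maintainer@example.com"
        pages: 12
    }

Data files can then be vetted against such a definition with cue vet, using the same unification mechanism described earlier for templating and validation.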

Andoh: Where can you learn more and get involved in CUE and the CUE community?

Van Lohuizen: There’s a website, cuelang.org, where you can find links to the community. We have a very active Slack community, and there are also GitHub Discussions for more Stack Overflow-style questions that will stay there and where people can get help. That’s a good place to start. We’re working on new documentation that should make things a little easier to read; most of the documentation we have was really written for language designers and not yet for getting people started, and we’re working on changing that. We think CUE is quite simple, actually, but if you read the documentation as it stands, it might not look like that.




MongoDB Exceeds Earnings Expectations and Shows Strong Growth Trajectory

MMS Founder
MMS RSS

Posted on mongodb google news. Visit mongodb google news

MongoDB, a leading technology company listed on NASDAQ (MDB), recently provided an update on its third quarter 2024 earnings guidance. The company expects its earnings per share for the period to be in the range of $0.47 to $0.50, which is significantly higher than the consensus estimate of $0.27. Additionally, MongoDB issued revenue guidance of $400 million to $404 million, surpassing the consensus revenue estimate of $389.12 million. These positive figures indicate strong performance for the company.

On September 4, 2023, the stock opened at $381.30, reflecting investors’ confidence in MongoDB’s prospects. Over the past twelve months, the stock has demonstrated a substantial growth trajectory with a low of $135.15 and a high of $439.00. The market capitalization of MongoDB stands at an impressive $26.91 billion, highlighting its position as a major player in the industry.

When analyzing financial indicators, it is worth noting that MongoDB has a price-to-earnings ratio of -81.65 and a beta value of 1.13, metrics that provide insight into the stock’s valuation and volatility, respectively. Furthermore, MongoDB boasts favorable current and quick ratios of 4.19 each, indicating a strong liquidity position. The debt-to-equity ratio stands at 1.44, suggesting prudent financial management by the company.

In terms of investor sentiment and involvement, several institutional investors have made recent changes to their positions in MongoDB’s stock. KB Financial Partners LLC acquired a new position in the company during the second quarter while Bessemer Group Inc., Clear Street Markets LLC, Parkside Financial Bank & Trust, and Coppell Advisory Solutions LLC also expanded their holdings in different quarters throughout this year.

Various brokerages have provided their insights on MDB’s performance as well as recommendations for potential investors. Citigroup increased its price target from $430.00 to $455.00, endorsing a buy rating for the company. Barclays, Oppenheimer, Capital One Financial, and Needham & Company LLC also raised their price targets and ratings. Overall, the consensus rating for MongoDB is Moderate Buy, with an average price target of $379.23.

In conclusion, MongoDB’s recent update on its third quarter 2024 earnings guidance indicates a positive outlook for the company. With significantly higher earnings per share and revenue estimates compared to consensus estimates, MongoDB demonstrates its strong performance in the market. The stock has shown considerable growth over the past year and has garnered attention from institutional investors. Analysts’ ratings further support this bullish sentiment surrounding MongoDB. As always, investors should conduct their own research and analysis before making any investment decisions.

MongoDB, Inc. (MDB): Buy. Updated on: 05/09/2023.

Price target: current $395.90, consensus $388.06 (analyst range: low $180.00, median $406.50, high $630.00).

Social sentiment: no social sentiment data was found for this stock.

Analyst ratings:
Miller Jump (Truist Financial): Buy
Mike Cikos (Needham): Buy
Rishi Jaluria (RBC Capital): Sell
Ittai Kidron (Oppenheimer): Sell
Matthew Broome (Mizuho Securities): Sell

MongoDB Exceeds Expectations with Strong Earnings but Faces Challenges in Profitability and Insider Sales


MongoDB (NASDAQ:MDB), a leading provider of database software, recently released its quarterly earnings data for the period ending on June 1st. The company exceeded expectations with an impressive earnings per share (EPS) of $0.56, surpassing the consensus estimate of $0.18 by a significant margin of $0.38. Additionally, MongoDB reported revenue of $368.28 million for the quarter, outperforming the consensus estimate of $347.77 million.

Moreover, MongoDB’s revenue showed substantial growth of 29.0% on a year-over-year basis, indicating positive momentum for the company in its industry sector. This is promising news for investors and industry analysts who closely follow the performance of companies like MongoDB.

However, despite these positive financial results, it is worth noting that MongoDB still faces challenges in terms of its net margin and return on equity. The company recorded a negative net margin of 23.58% and a negative return on equity of 43.25%. These figures signal areas where MongoDB needs to improve its profitability and efficiency in order to maximize shareholder value.

Looking ahead, equities research analysts anticipate that MongoDB will post earnings per share of -$2.80 for the current fiscal year, reflecting caution and skepticism about the company’s profitability prospects.

Institutional investors have shown interest in investing in MongoDB amidst these shifting dynamics within the company’s financial performance indicators. KB Financial Partners LLC recently acquired a new position in MongoDB worth approximately $27,000 during the second quarter. Similarly, Bessemer Group Inc., Clear Street Markets LLC, Parkside Financial Bank & Trust, and Coppell Advisory Solutions LLC all made notable changes to their positions in MongoDB stock.

Another crucial aspect to consider when evaluating an investment opportunity is insider activity within a company. In recent news regarding MongoDB, Chief Accounting Officer Thomas Bull sold 516 shares at an average price of $406.78, resulting in a total value of $209,898.48. Following this sale, Bull now possesses 17,190 shares worth approximately $6,992,548.20.

Furthermore, CFO Michael Lawrence Gordon sold 2,197 shares of MongoDB stock at an average price of $406.79, translating to a total transaction value of $893,717.63. After the sale, Gordon now owns 101,509 shares valued at approximately $41,292,846.11.

Overall, these insider sales indicate that certain executives within MongoDB have opted to reduce their holdings in the company’s stock. It is essential to take note of insider activity as it may provide insights into the confidence and belief held by key figures within the organization.

In summary, MongoDB’s recent quarterly earnings report showcased impressive EPS results and revenue growth. However, challenges still persist with negative net margin and return on equity figures. The company’s future profitability remains uncertain as analysts express skepticism about its performance for the current fiscal year. Nonetheless, institutional investors have shown interest in MongoDB stock while some insiders have chosen to decrease their positions in the company. These factors create a complex environment for potential investors to navigate when considering whether or not to invest in MongoDB’s stock.

Article originally posted on mongodb google news. Visit mongodb google news



Royal Bank of Canada Reiterates ‘Outperform’ Rating for MongoDB Stock with Positive Price …

MMS Founder
MMS RSS

Posted on mongodb google news. Visit mongodb google news

On September 4, 2023, investment analysts at Royal Bank of Canada reiterated their “outperform” rating for MongoDB (NASDAQ:MDB) stock in a note sent to investors. According to Benzinga, they also set a price objective of $445.00 for the stock, suggesting a potential upside of 13.27% from the company’s previous close.

During Friday’s trading session, MDB traded up $11.58 to reach $392.88. A total of 5,887,726 shares were exchanged, compared to an average volume of 1,709,101 shares. Over the past year, MongoDB has seen a range in its stock price from a low of $135.15 to a high of $439.00. Currently, the stock has been hovering around its fifty-day moving average of $389.93 and its two-hundred day moving average of $302.86.

With regards to financials and market metrics, the company has a quick ratio and current ratio both standing at 4.19. This indicates that MongoDB has sufficient short-term assets to cover its immediate liabilities comfortably as well as meet any contingencies that may arise in the near term.

Moreover, MongoDB carries a debt-to-equity ratio of 1.44, reflecting some leverage employed in funding its operations. It has a market capitalization of $27.73 billion and trades at a price-to-earnings ratio of -84.13 against negative earnings per share figures.

Taking into account institutional investors’ activities in relation to MDB stock holdings, there have been recent changes in their positions within the company’s shares. Raymond James & Associates increased their position by 32%, while PNC Financial Services Group Inc., MetLife Investment Management LLC, Panagora Asset Management Inc., and Vontobel Holding Ltd., among others, made adjustments during the first quarter. In aggregate, hedge funds and other institutional investors currently own 88.89% of MongoDB’s stock.

Regarding the company’s financial performance, MongoDB last reported its earnings results on June 1st. For the quarter, it posted earnings per share (EPS) of $0.56, surpassing the consensus estimate of $0.18 by $0.38. The business also generated a revenue of $368.28 million during the quarter, exceeding analysts’ expectations of $347.77 million. However, MongoDB had negative return on equity (ROE) and net margin figures at -43.25% and -23.58%, respectively.

Despite these negative figures, the company’s revenue showed positive growth of 29% year-over-year for the quarter, compared with the corresponding period of the previous year, in which the company reported earnings per share of ($1.15). Research analysts anticipate that MongoDB will post earnings per share of -$2.80 for the current fiscal year.

In conclusion, MongoDB has recently received an “outperform” rating from Royal Bank of Canada with a price objective indicating potential upside from its previous closing price. While the company faces certain challenges related to negative profitability metrics, it has demonstrated strong revenue growth and continues to attract institutional investor interest within its stock holdings.

MongoDB, Inc. (MDB): Buy. Updated on: 04/09/2023.

Price target: current $392.88, consensus $388.06 (analyst range: low $180.00, median $406.50, high $630.00).

Social sentiment: no social sentiment data was found for this stock.

Analyst ratings:
Miller Jump (Truist Financial): Buy
Mike Cikos (Needham): Buy
Rishi Jaluria (RBC Capital): Sell
Ittai Kidron (Oppenheimer): Sell
Matthew Broome (Mizuho Securities): Sell

Research Reports, Analyst Recommendations, and Insider Sales: Assessing MongoDB’s Potential


MongoDB, Inc. (MDB), a leading modern, general-purpose database platform, has recently been the subject of several research reports and analyst recommendations. Such reports play a crucial role in providing insights into the company’s performance and growth potential for potential investors.

Piper Sandler, a renowned global investment bank and asset management firm, raised its target price on MongoDB from $400.00 to $425.00. In their research note released on Friday, Piper Sandler also provided the company with an “overweight” rating, indicating that they believe it will outperform the market.

Similarly, Needham & Company LLC, an investment banking firm specializing in technology research, increased their price objective for MongoDB shares from $430.00 to $445.00. They also issued a “buy” rating for the stock as they assessed its growth prospects.

Truist Financial, another prominent financial services company, boosted its price target on MongoDB shares from $420.00 to $430 while maintaining their “buy” rating. This positive outlook from Truist Financial aligns with other optimistic recommendations surrounding MongoDB’s future performance.

Macquarie, a leading global investment banking and financial services group headquartered in Australia, raised their target price on MongoDB shares from $434.00 to $456.00 in their report released earlier this month.

Tigress Financial went even further, raising its price target on MongoDB from $365.00 to $490.00 two months ago, on June 28th.

These positive analyst ratings and upward revisions of price targets indicate growing confidence in the future trajectory of MongoDB’s business operations and market position.

It is worth mentioning that, despite the favorable opinions of most analysts, one equity research analyst has issued a “sell” rating for the stock. However, with twenty analysts assigning “buy” ratings and others assigning “hold” ratings, the overall sentiment remains positive.

According to data from Bloomberg, MongoDB currently holds a consensus rating of “Moderate Buy,” reflecting the varying assessments of experts in the field. The average price target for MongoDB stands at $405.35, further emphasizing analysts’ confidence in its growth potential.

Shifting the focus slightly, recent transactions involving prominent company insiders have also captured attention. Director Dwight A. Merriman reportedly sold 1,000 shares of MongoDB stock on Tuesday, July 18th, at an average price of $420.00. The total value of this transaction amounted to $420,000.00. As a result of this sale, Merriman now holds 1,213,159 shares directly in the company’s stock with an estimated value of $509,526,780.

In addition to Merriman’s sale, CRO Cedric Pech sold 360 shares of MongoDB stock on Monday, July 3rd, at an average price of $406.79. The total value generated from this transaction equaled $146,444.40. Following this sale, Pech now possesses 37,156 shares in the company worth approximately $15,114,689.24.

These insider transactions amount to significant sales activity over the past ninety days: 76,551 shares sold, totaling approximately $31,143,942 in value. It is worth noting, however, that insiders hold only about 4.80% of the company’s stock.

While researching opportunities in the financial market and evaluating investment options requires careful consideration based on individual risk tolerance and other factors, these research reports and insider transactions can provide valuable insight into MongoDB’s current outlook and future prospects. Overall, the combination of positive analyst ratings, sizable price target revisions, and notable insider sales activity showcases both cautious optimism and ongoing interest surrounding MongoDB, Inc.

Article originally posted on mongodb google news. Visit mongodb google news



Google Expands Duet AI in Google Cloud for App Development, DevOps, and More

MMS Founder
MMS Sergio De Simone

Article originally posted on InfoQ. Visit InfoQ

At Google Cloud Next, Google has expanded its always-on collaborator Duet AI with new features aimed at helping developers with application development, DevOps, database management and migration, data analysis and visualization, as well as cybersecurity.

Duet AI in Google Cloud provides expert assistance across your entire software development lifecycle. This includes code generation, source citation, test coverage, designing and publishing APIs, migrating and modernizing applications, and much more.

Duet AI now supports code refactoring, with the aim of helping developers modernize their legacy applications. This is normally an expensive task that, according to Google, Duet AI makes as easy as formulating a prompt in natural language. For example, a developer could use Duet AI to migrate an existing application from C++ to Go and replace its database with Google Cloud SQL, using a simple prompt like: “Convert this function to Go and use Cloud SQL”.

Another new feature in Duet AI is context-aware code generation, which leverages knowledge about a company’s codebase and libraries to generate specific code suggestions. This means, for example, that the generated code could use the company’s classes and methods as found in their codebase.

For DevOps, Duet AI provides new capabilities to operate and manage infrastructure by helping to automate deployments, enforce correct configuration, and understand and debug issues.

Duet AI is also integrated with BigQuery to provide contextual assistance to write SQL and Python code to access and analyze data. Additionally, it makes it possible to generate vector embeddings in BigQuery to build semantic searches and recommendation queries.

Besides BigQuery, Duet AI can work with relational databases such as Cloud Spanner, AlloyDB, and Cloud SQL and generate code to structure, modify, or query data based on a prompt in natural language. Furthermore, it can drive Google Database Migration Service to help automate code conversion for cases such as stored procedures, functions, triggers, packages, and custom PL/SQL code.

As a final note about Duet AI’s new features, it is now possible to summarize and classify vulnerability information and to provide suggestions on how to remediate security issues.

Duet AI is available in Google Cloud console, Cloud Workstations and Cloud Shell Editor, as well as in external IDEs through Cloud Code IDE extensions. Supported IDEs include VSCode, CLion, GoLand, IntelliJ, PyCharm, Rider, and WebStorm.

While Duet AI is still in preview, Google said its general availability will come later this year.



$100 Invested In MongoDB 5 Years Ago Would Be Worth This Much Today – Benzinga

MMS Founder
MMS RSS

Posted on mongodb google news. Visit mongodb google news

MongoDB (MDB) has outperformed the market over the past 5 years by 28.07% on an annualized basis, producing an average annual return of 37.5%. Currently, MongoDB has a market capitalization of $27.73 billion.

Buying $100 In MDB: If an investor had bought $100 of MDB stock 5 years ago, it would be worth $499.53 today based on a price of $392.88 for MDB at the time of writing.
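As a rough back-of-the-envelope check based only on the figures above: growing $100 into $499.53 is a 4.9953x multiple, which implies a purchase price of roughly $392.88 / 4.9953 ≈ $78.65 per share five years ago, and a compounded annual growth rate of about 4.9953^(1/5) − 1 ≈ 38%, broadly consistent with the average annual return quoted above.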

MongoDB’s Performance Over Last 5 Years

Finally, what’s the point of all this? The key insight to take from this article is how much of a difference compounded returns can make in your cash growth over time.

This article was generated by Benzinga’s automated content engine and reviewed by an editor.

Article originally posted on mongodb google news. Visit mongodb google news



MongoDB 7 Adds Queryable Encryption – I Programmer

MMS Founder
MMS RSS

Posted on nosqlgooglealerts. Visit nosqlgooglealerts

MongoDB 7 has been released with full support for encrypted queries and easier management of Atlas Search indexes.

MongoDB is a NoSQL document database that stores its documents in a JSON-like format with optional schemas. MongoDB Atlas is the fully-managed cloud database from the MongoDB team.


This release adds full support for Queryable Encryption, which was first added in MongoDB 6 as a preview feature. Queryable Encryption adds an encrypted search scheme that can be used to search encrypted data without impacting query speed or app performance. The data remains encrypted at all times on the database, including in memory and in the CPU; keys never leave the application and cannot be accessed by the database server.

The next change makes it easier to manage Atlas Search indexes, which can now be created and maintained with mongosh methods and database commands. These commands are only available for deployments hosted on MongoDB Atlas and require an Atlas cluster tier of at least M10.

Performance has also been improved, especially when working with time series data: this release brings better storage optimization and compression as well as faster queries. In addition, change streams now support handling changes in large documents, even with pre-images and post-images, without causing unexpected errors.

The MongoDB team says developers get greater flexibility, partly through improvements such as compound wildcard indexes, approximate percentiles, and bitwise operators in aggregations. There is also new support for using user role variables within aggregation pipelines, which can be used to set up a single view that displays different data based on the logged-in user’s permissions.

Finally, time-series collections now support fine-grained updates and deletes, and new metrics have been added to help select a shard key, reducing developer effort.

MongoDB 7.0 is available now.


More Information

MongoDB Website

Related Articles

MongoDB 6 Adds Encrypted Query Support

MongoDB 5 Adds Live Resharding

MongoDB Trends

MongoDB Atlas Adds MultiCloud Cluster Support

MongoDB Adds GraphQL Support

MongoDB Improves Query Language


