Rider 2023.3: AI Assistant, .NET 8 Support, C# 12 and F# 8 Features, Debugging Improvements and More

MMS Founder
MMS Robert Krzaczynski

Article originally posted on InfoQ. Visit InfoQ

JetBrains has released Rider 2023.3, the latest version of their cross-platform .NET IDE. This release contains an AI Assistant, support for the .NET 8 SDK, and an extended list of C# 12 and F# 8 features. There are also improvements regarding debugging, running multiple projects, UI/UX and Unity.

AI Assistant, now out of technical preview, includes context-aware AI chat, multiline code completion, in-editor code and documentation generation, unit test creation, a Diff view for suggested refactorings, and the option to create a personalised prompt library. These updates contribute to a more intelligent and efficient development experience. Rider users can access the AI Assistant by subscribing to JetBrains AI.

Rider 2023.3 introduces official support for the .NET 8 SDK, featuring updated project templates and the capability to create, run, and debug projects for the new SDK. The release introduces new C# 12 language features such as primary constructors, interceptors, and alias directives. It also adds support for F# 8 language features, including abbreviated lambda expressions, nested record updates, static members in interfaces, static let bindings and more. Other enhancements cover support for @ variables, Identity API endpoints, and the introduction of a cross-platform hot reload feature.

Rider 2023.3 also introduces the “Run Multiple Projects” feature, which enables the creation of personalised multi-launch configurations to manage solution dependencies. Such a configuration simplifies the controlled start-up of multiple projects, combining diverse run configurations with additional tasks like solution building and publishing. A new multi-launch configuration automatically becomes the default option in the toolbar selector, offering an easy way to run or debug as needed.


Run Multiple Projects (Source: JetBrains blog)

Type dependency diagrams, another new feature in JetBrains Rider, visually illustrate codebase interactions for improved comprehension of project design and clearer debugging. Applicable to C# and Visual Basic projects, this tool enables the visual study of type dependencies by incorporating various types from different projects or assemblies into the diagram, facilitating the examination of diverse dependencies.


Type dependency diagrams (Source: JetBrains blog)

Additionally, Rider 2023.3 provides predictive debugging. This feature anticipates potential code issues without execution. The update also introduces a “Modules view”, enabling inspection of dynamic link libraries (DLLs) and executables used by the application. 

In the comments on the official release article, a community member asked if there are any plans to add predictive debugging for Unity projects. Sasha Ivanova, a marketing content writer in .NET tools at JetBrains, replied that the team does not have any immediate plans for it but could include it if there is enough demand.

This release also contains changes related to UI/UX, Unity and a new security inspection feature designed to make published vulnerabilities more apparent. Other features can be checked on the official JetBrains website. The entire changelog of this release is available on YouTrack.

About the Author

Subscribe for MMS Newsletter

By signing up, you will receive updates about our latest information.

  • This field is for validation purposes and should be left unchanged.


Relational vs NoSQL Cloud Databases: Pros and Cons – TechRepublic

MMS Founder
MMS RSS

Posted on mongodb google news. Visit mongodb google news

The profound and sustained rise of NoSQL cloud databases, like Amazon DynamoDB, MongoDB and Apache Cassandra, marks a significant change in how organizations manage vast and varied datasets. There’s nothing wrong with the traditional relational database management system. In fact, many NoSQL databases have added support for SQL-style queries.

But in a world where enterprises are deluged by unstructured data from mobile, social, cloud, sensors and other sources, NoSQL is simply better. Compared to an RDBMS, it is better at managing massive amounts of unstructured data and offers greater horizontal scalability and schema flexibility.

NoSQL databases are also more flexible when it comes to data organization and easier to use when dealing with unstructured data. Choosing between relational and NoSQL cloud databases therefore comes down to your needs in terms of schema structure (type of data), data organization, scaling and other factors that we examine below.

Relational database vs. NoSQL database: Comparison table

Database type | Schema structure | Scaling approach | Data organization | Transaction properties | Ease of use (unstructured data)
Relational    | Pre-defined      | Vertical         | Structured        | ACID-compliant         | Moderate
NoSQL         | Schema-less      | Horizontal       | Flexible          | BASE-compliant         | Easy

NoSQL database

A NoSQL database is designed for high operational speed and flexibility in the types of data stored and how they are structured. NoSQL databases are primarily used for large sets of distributed data and are particularly effective for big data and real-time applications.

SEE: Non-relational databases find an audience in the rising database market.

Unlike relational databases, NoSQL databases are specifically built to handle rapidly changing unstructured data, making them ideal for organizations dealing with dynamic and varied data formats.
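The schema contrast is easy to see in code. The sketch below is purely illustrative and uses Python's built-in sqlite3 as a stand-in for both worlds: a pre-defined relational schema rejects new fields without a migration, while a document-style store happily keeps records of different shapes side by side (the table and field names are invented for the example):

```python
import json
import sqlite3

# Relational: the schema is fixed up front; every row must fit it.
rel = sqlite3.connect(":memory:")
rel.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT NOT NULL)")
rel.execute("INSERT INTO users (name) VALUES (?)", ("Ada",))
# Adding a field like "tags" would first require an ALTER TABLE migration.

# Document-style: each record is a self-describing JSON document,
# so records with different shapes can live side by side.
docs = sqlite3.connect(":memory:")
docs.execute("CREATE TABLE users (doc TEXT)")  # one opaque JSON column
docs.execute("INSERT INTO users VALUES (?)", (json.dumps({"name": "Ada"}),))
docs.execute("INSERT INTO users VALUES (?)",
             (json.dumps({"name": "Grace", "tags": ["db", "cloud"]}),))

# Each stored document carries its own set of fields.
shapes = [set(json.loads(d).keys()) for (d,) in docs.execute("SELECT doc FROM users")]
```

In a real NoSQL database such as MongoDB, the second half is simply an insert of two differently shaped documents into the same collection; no schema change is involved.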

Pros

  • Scalability: NoSQL databases’ ability to scale horizontally makes them ideal for handling vast amounts of data across distributed servers.
  • Data modeling flexibility: Because they are schema-less, NoSQL databases are ideal for various types of data formats, such as document stores, key-value stores, graph databases and more.
  • High availability: Designed for distributed environments, NoSQL databases offer robust solutions for maintaining high availability — critical for continuous operations.
  • Ease of use: In terms of managing unstructured data that doesn’t fall within the rigid structure of relational databases, NoSQL databases are more user-friendly.
  • Performance with unstructured data: NoSQL databases are highly adept at managing unstructured and semi-structured data, offering high performance in different scenarios.
  • Cost-effective at scale: NoSQL databases tend to be more cost-effective, especially in cloud environments, when compared to traditional relational databases.

Cons

  • Learning curve due to lack of standardization: Unlike relational databases, which share the standardized SQL, NoSQL databases each have their own query language that database professionals must learn.
  • Complexity in data consistency: Due to the distributed nature and eventual consistency model of NoSQL, achieving data consistency is often more complex.
  • Limited transactional support: NoSQL databases often do not provide full atomicity, consistency, isolation and durability transaction support, which can be a limitation for some applications.
  • Challenges with backup and recovery: The distributed architecture of NoSQL databases can complicate backup and recovery processes, requiring more sophisticated strategies compared to RDBMS.

Relational database

Relational databases have been around for much longer. Unlike NoSQL databases, they store and provide access to data points that are related to one another. RDBMSs are built on a model that uses a structure of tables linked by defined relationships expressing dependencies between the data.

PREMIUM: Finding the right database administrator is key to building effective databases.

Primarily, relational databases are used for data storage and retrieval operations in applications where data accuracy, consistency and integrity are paramount. They are the backbone of a wide array of business applications.

Pros

  • Strong consistency: Relational databases are known for their strong consistency models. They are reliable and have predictable data transactions — a critical requirement for many business applications.
  • Structured data integrity: Relational databases excel at maintaining the integrity of structured data, with a well-defined schema that enforces data types and relationships.
  • Mature and standardized: RDBMS technologies are mature with established standards, notably SQL.
  • Robust transactional support: Relational databases offer robust support for ACID transactions, which is vital for applications that require high levels of data accuracy and reliability.
  • Advanced security features: RDBMSs often come with advanced security features and access controls.
  • Comprehensive tooling and support: Due to their long-standing presence in the market, relational databases have a wide range of tools, extensive documentation and strong community and vendor support.

Cons

  • Scalability challenges: Scaling a relational database typically requires vertical scaling — adding more powerful hardware — which is costly and has its limits.
  • Rigid schema design: The predefined schema of an RDBMS can make it less flexible in accommodating changes in data structure. Significant effort is needed to modify existing schemas.
  • Performance issues with large data volumes: RDBMSs can face performance bottlenecks when dealing with very large volumes of data or high-velocity data, such as that found in big data applications.
  • Complexity in handling unstructured data: Relational databases are not inherently designed to handle unstructured or semi-structured data.
  • Cost- and resource-intensive: Maintaining and scaling an RDBMS can be resource-intensive and costly, especially for large databases requiring high-performance hardware.

Choosing between a relational and a NoSQL cloud database

With their strong consistency, structured data integrity and transactional support, relational databases are ideal for situations where data integrity and order are paramount. They prove useful in scenarios requiring complex queries and precise data management. However, they face scalability challenges and are less flexible when the data constantly undergoes rapid changes.

In contrast, NoSQL databases offer unparalleled scalability and flexibility in data modeling. NoSQL databases are also more adept at handling unstructured data, making them suitable for applications that require rapid development and the handling of large volumes of diverse data types.

However, before you make the decision to migrate to NoSQL, you should note that, while they excel in scalability and flexibility, NoSQL databases often have a steeper learning curve due to the lack of standardization and may present challenges in ensuring data consistency and transactional support.

Top relational and NoSQL cloud databases to consider

No two relational or NoSQL cloud databases are the same. They are all unique and work best with particular use cases. Below are some databases to consider for your organization.

NoSQL databases

  • Amazon DynamoDB is best for organizations or projects that demand a highly reliable and scalable NoSQL database with minimal maintenance needs. It is commonly deployed in web applications, games, mobile apps, Internet of Things and numerous other applications.
  • MongoDB Atlas is a fully-managed cloud NoSQL service. It works best in applications that require a flexible schema for diverse and rapidly changing data formats, particularly in web and mobile applications and IoT.
  • Apache Cassandra is a good bet if you have a scenario that demands high flexibility and fault tolerance. It has been successfully deployed across multiple data centers and real-time big data applications.
  • Couchbase is your go-to NoSQL database if you deal with interactive applications that demand high throughput and low latency, such as mobile and edge computing.

Relational databases

  • Oracle Cloud is best for large-scale enterprise applications requiring robust performance, security and reliability.
  • Microsoft SQL Server is ideal for organizations looking for a comprehensive relational database solution with strong integration with Microsoft products and services.
  • PostgreSQL is well-suited to organizations seeking an open-source RDBMS with a strong emphasis on standards compliance and extensibility.

Article originally posted on mongodb google news. Visit mongodb google news



DataStax GAs ‘Data API’ for GenAI Application Development – The New Stack

MMS Founder
MMS RSS

Posted on nosqlgooglealerts. Visit nosqlgooglealerts





The new API is aimed squarely at JavaScript and Python developers building vector-first RAG/LLM generative AI applications.


Jan 17th, 2024 7:15am by



Feature image by Gerd Altmann from Pixabay.

In this new age of generative AI (GenAI), NoSQL hybrid databases have new relevancy.

Enterprises need to encode unstructured data as numerical vector embeddings and GenAI applications need to perform similarity searches among those vectors to provide contextual information when sending prompts to large language models (LLMs).

This technique, known as retrieval augmented generation (RAG), has led a number of hybrid database platforms to host vector storage and search workloads. And while “pure play” vector databases are also competing in the market, the hybrid folks are out to prove that they are just as rigorous, while being far more versatile.
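The retrieval step at the heart of RAG is a nearest-neighbor search over embedding vectors. The toy, dependency-free Python sketch below illustrates the idea only; the three-dimensional "embeddings" and chunk texts are made up, and a real system would use an embedding model and a vector index such as the one DataStax hosts:

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two vectors: 1.0 means same direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

# Tiny corpus of text chunks with pretend embedding vectors.
corpus = {
    "Cassandra is a distributed NoSQL database": [0.9, 0.1, 0.0],
    "How to bake sourdough bread":               [0.0, 0.2, 0.9],
    "Vector search powers RAG retrieval":        [0.7, 0.6, 0.1],
}

def retrieve(query_vec, k=2):
    """Return the k chunks whose embeddings are most similar to the query."""
    ranked = sorted(corpus,
                    key=lambda text: cosine_similarity(query_vec, corpus[text]),
                    reverse=True)
    return ranked[:k]

# A query embedding close to the database-related chunks.
context = retrieve([0.8, 0.3, 0.0])
# The retrieved chunks are then prepended to the prompt sent to the LLM.
```

The database-flavored chunks rank ahead of the baking one, which is exactly the contextual filtering that keeps LLM prompts relevant.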

DataStax, the vendor behind the open source Cassandra project, one of the first databases on the NoSQL scene, offers both the customer-managed DataStax Enterprise platform and managed cloud service counterpart Astra DB.

On Wednesday, DataStax released to general availability its new Data API, aimed squarely at developers building vector-first RAG/LLM applications.

An API for GenAI

The New Stack spoke with DataStax chief product officer Ed Anuff, who gave us the lowdown not just on the Data API itself, but the specific context and inspiration surrounding it, as well.

Anuff set the stage this way: “As we looked at revving the API, we said, okay, we’ve got a lot of people building RAG applications that are definitely vector-first… [so] we said, can we use this as the basis of… a RAG-first API.”

The result is a new API for both JavaScript and Python developers that, according to DataStax’s press release, “provides all the data and a complete RAG stack for production GenAI apps with high relevancy and low latency.”

Based on the JVector search engine, DataStax says the new API provides up to 20% higher relevancy, 9x higher throughput and up to 74x faster response times than other vector databases. Perhaps the most revolutionary facet of all is that the new API was built to minimize the need for deep Cassandra knowledge in order to use it.

Explaining DataStax’s inspiration to design the API that way, Anuff told The New Stack that, during a period that began in the third quarter of 2023, “close to half of the people that would sign up for Astra DB, our cloud service, were doing so with no previous Cassandra experience and were just looking to build RAG-based applications.”

Anuff added: “For us, it was a wakeup call.”

Other GenAI/RAG Use Tools

Along with the API, DataStax is also launching an updated developer experience for Astra DB, which the company says adds a dashboard, data loading and exploration tools, and integration with leading AI and machine learning frameworks.

DataStax says the new environment simplifies integration with tools like LangChain, OpenAI, Vercel, Google’s Vertex AI, AWS, and Azure. In addition, developers can support RAG techniques, like forward-looking active retrieval (FLARE) and ReAct, that synthesize multiple responses, and do so with minimal latency.

Anuff explained that the new Data API can also be used as a general-purpose document API, despite its primary intent of enabling vector/RAG applications. For non-vector applications, the Cassandra Query Language (CQL) API remains in service.

Conversely, while Astra DB’s inclusion of Pulsar enables non-vector streaming data applications, it can also be used for the ingestion of vector data and the accompanying processing that may be necessary.

Anuff summed it up this way: “We do have a technology stack called RAGStack that…builds on top of LangChain, but also uses LlamaIndex as well as a few other open source packages,” adding that RAGStack “…handles the ingestion problem and Pulsar is one of the mechanisms we use for that.”

NoSQL’s Grown up

There was a time when NoSQL databases seemed like an oddity, with a raison d’etre of simply being unlike the relational databases that preceded them. They were rebels, often without a cause. They also lacked rigor: even if you could get past their lack of declarative query languages and inability to join tables, the absence of real indexing on anything other than a primary key made them more of a developer toy than anything. But NoSQL databases matured.

The query languages were built, and solid secondary indexing was added. And as each type of NoSQL database added elements of the other types, they really came to be hybrid databases for data of varying levels of formal structure.

Now, these hybrid databases provide more than a mere alternative to the relational model. Generative AI and vectorization make unstructured data more usable and more critical to business systems, and hybrid NoSQL databases are the optimal platforms to host and query this data.

Vector, graph and document data models are no longer just an intellectually interesting approach, but instead represent a necessary paradigm shift in database management. Previously dark data is now contextual data, and it’s indispensable in mitigating LLM “hallucinations” and creating better GenAI output overall. That means the databases and APIs needed to work with this data need to become mainstream developer tools.

That’s the wave DataStax is riding here, and a wave more developers can now ride as well.





Advisor Partners II LLC Has $393000 Stock Position in MongoDB, Inc. (NASDAQ:MDB)

MMS Founder
MMS RSS

Posted on mongodb google news. Visit mongodb google news

Advisor Partners II LLC lifted its stake in MongoDB, Inc. (NASDAQ:MDB) by 54.3% during the 3rd quarter, according to the company in its most recent Form 13F filing with the Securities and Exchange Commission. The fund owned 1,137 shares of the company’s stock after acquiring an additional 400 shares during the period. Advisor Partners II LLC’s holdings in MongoDB were worth $393,000 at the end of the most recent reporting period.

Several other hedge funds and other institutional investors have also modified their holdings of the stock. Raymond James & Associates boosted its stake in MongoDB by 32.0% during the first quarter. Raymond James & Associates now owns 4,922 shares of the company’s stock worth $2,183,000 after buying an additional 1,192 shares in the last quarter. PNC Financial Services Group Inc. boosted its stake in MongoDB by 19.1% during the first quarter. PNC Financial Services Group Inc. now owns 1,282 shares of the company’s stock worth $569,000 after buying an additional 206 shares in the last quarter. MetLife Investment Management LLC acquired a new stake in MongoDB during the first quarter worth $1,823,000. Panagora Asset Management Inc. boosted its stake in MongoDB by 9.8% during the first quarter. Panagora Asset Management Inc. now owns 1,977 shares of the company’s stock worth $877,000 after buying an additional 176 shares in the last quarter. Finally, Vontobel Holding Ltd. boosted its stake in shares of MongoDB by 100.3% in the 1st quarter. Vontobel Holding Ltd. now owns 2,873 shares of the company’s stock valued at $1,236,000 after purchasing an additional 1,439 shares in the last quarter. Institutional investors and hedge funds own 88.89% of the company’s stock.

Insider Activity at MongoDB

In related news, CAO Thomas Bull sold 359 shares of the business’s stock in a transaction that occurred on Tuesday, January 2nd. The stock was sold at an average price of $404.38, for a total value of $145,172.42. Following the completion of the transaction, the chief accounting officer now directly owns 16,313 shares in the company, valued at approximately $6,596,650.94. The sale was disclosed in a legal filing with the Securities & Exchange Commission, which is available through the SEC website. Also, CEO Dev Ittycheria sold 100,500 shares of the business’s stock in a transaction that occurred on Tuesday, November 7th. The shares were sold at an average price of $375.00, for a total value of $37,687,500.00. Following the completion of the transaction, the chief executive officer now owns 214,177 shares of the company’s stock, valued at $80,316,375. The disclosure for this sale can be found here. In the last three months, insiders sold 147,029 shares of company stock worth $56,304,511. 4.80% of the stock is owned by company insiders.

MongoDB Price Performance

MongoDB stock opened at $405.44 on Wednesday. The firm has a market capitalization of $29.26 billion, a P/E ratio of -153.58 and a beta of 1.23. MongoDB, Inc. has a 1 year low of $179.52 and a 1 year high of $442.84. The business has a 50-day moving average of $399.14 and a 200-day moving average of $380.69. The company has a current ratio of 4.74, a quick ratio of 4.74 and a debt-to-equity ratio of 1.18.

MongoDB (NASDAQ:MDB) last posted its earnings results on Tuesday, December 5th. The company reported $0.96 earnings per share (EPS) for the quarter, topping the consensus estimate of $0.51 by $0.45. The company had revenue of $432.94 million during the quarter, compared to analyst estimates of $406.33 million. MongoDB had a negative return on equity of 20.64% and a negative net margin of 11.70%. The company’s revenue for the quarter was up 29.8% compared to the same quarter last year. During the same period in the previous year, the company earned ($1.23) EPS. On average, equities analysts anticipate that MongoDB, Inc. will post -1.64 EPS for the current year.

Analysts Set New Price Targets

Several brokerages have recently commented on MDB. Barclays boosted their target price on shares of MongoDB from $470.00 to $478.00 and gave the company an “overweight” rating in a research note on Wednesday, December 6th. Piper Sandler boosted their target price on shares of MongoDB from $425.00 to $500.00 and gave the company an “overweight” rating in a research note on Wednesday, December 6th. Truist Financial restated a “buy” rating and issued a $430.00 target price on shares of MongoDB in a research note on Monday, November 13th. Needham & Company LLC boosted their target price on shares of MongoDB from $445.00 to $495.00 and gave the company a “buy” rating in a research note on Wednesday, December 6th. Finally, KeyCorp cut their price target on MongoDB from $495.00 to $440.00 and set an “overweight” rating for the company in a report on Monday, October 23rd. One investment analyst has rated the stock with a sell rating, three have given a hold rating and twenty-one have assigned a buy rating to the stock. According to MarketBeat, MongoDB presently has a consensus rating of “Moderate Buy” and an average price target of $430.41.

Check Out Our Latest Stock Report on MDB

MongoDB Profile


MongoDB, Inc provides a general-purpose database platform worldwide. The company offers MongoDB Atlas, a hosted multi-cloud database-as-a-service solution; MongoDB Enterprise Advanced, a commercial database server for enterprise customers to run in the cloud, on-premise, or in a hybrid environment; and Community Server, a free-to-download version of its database, which includes the functionality that developers need to get started with MongoDB.

See Also

Institutional Ownership by Quarter for MongoDB (NASDAQ:MDB)



Receive News & Ratings for MongoDB Daily – Enter your email address below to receive a concise daily summary of the latest news and analysts’ ratings for MongoDB and related companies with MarketBeat.com’s FREE daily email newsletter.

Article originally posted on mongodb google news. Visit mongodb google news



Amazon ECS Integration with Amazon EBS for Data Processing Workloads and Flexible Storage

MMS Founder
MMS Steef-Jan Wiggers

Article originally posted on InfoQ. Visit InfoQ

AWS recently announced that Amazon Elastic Container Service (Amazon ECS) supports an integration with Amazon Elastic Block Store (Amazon EBS), which makes it easier for users to run a broader range of data processing workloads.

With the integration, users can provision Amazon EBS storage for their ECS tasks running on AWS Fargate and Amazon Elastic Compute Cloud (Amazon EC2) without needing to manage storage or compute. Moreover, users have various storage options for their containerized applications running on Amazon ECS. By default, Fargate tasks come with 20 GiB of ephemeral storage. Still, users can configure up to 200 GiB for tasks requiring extra storage, such as downloading large container images or temporary scratch work.

In addition, Amazon ECS allows configuring Amazon Elastic File System (EFS) for applications requiring concurrent access to a shared dataset, suitable for workloads like web applications and machine learning frameworks, while supporting simultaneous attachment to multiple tasks across a region. Alternatively, for applications needing high-performance, cost-effective storage exclusive to individual tasks, Amazon ECS enables the provision and attachment of Amazon EBS storage, known for low-latency, high-performance block storage within an Availability Zone.

Users can attach EBS volumes to their ECS tasks by setting the volume mount point for their container in the task definition and specifying Amazon EBS storage requirements for the task at runtime. In most scenarios, getting started involves merely indicating the required volume size for the task, with the option to configure all EBS volume attributes and the desired file system for formatting the volume.
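As a hypothetical sketch, such a runtime specification might look like the following in Python. The field names (`volumeConfigurations`, `managedEBSVolume`, `sizeInGiB`, and so on) follow the announcement but should be verified against the current AWS ECS API reference, and the ARNs, cluster, and task-definition names are placeholders:

```python
# Illustrative only: attach a 100 GiB gp3 EBS volume to an ECS task at
# launch. The volume name must match the volume referenced by the
# container's mount point in the task definition.
volume_configurations = [
    {
        "name": "datavol",  # placeholder volume name
        "managedEBSVolume": {
            "sizeInGiB": 100,
            "volumeType": "gp3",
            "filesystemType": "xfs",  # formatted before the task starts
            # Placeholder infrastructure role ARN:
            "roleArn": "arn:aws:iam::123456789012:role/ecsInfrastructureRole",
        },
    }
]

# The configuration would then be passed when running the task, e.g.:
# import boto3
# ecs = boto3.client("ecs")
# ecs.run_task(
#     cluster="my-cluster",
#     taskDefinition="my-task:1",
#     launchType="FARGATE",
#     volumeConfigurations=volume_configurations,
# )
```

Because the volume is specified at run time rather than baked into the task definition, each task launch can request a different size or volume type.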

Create Task Definition in the AWS Console (Source: AWS News blog post)

On a Reddit thread, a respondent, Zenin, questioned the persistence of the storage:

Yet, it’s not exactly persistent storage, correct?  If your container crashes and gets started on a new node, it tosses the EBS volume and makes a new one from scratch, right? So, is this really a way to get a big scratch space or, with a snapshot, a big, predefined data set loaded, but not something to use with, say, a database? Or am I misreading this?

With an AWS employee responding:

That is correct. Currently, the primary use case for EBS volume attachment to ECS is for getting large amounts of data to your task quickly and efficiently. You should find it to be much faster and more performant than trying to stuff a lot of data in a container image, or download it on the fly after the task starts. We know we have more work to do in order to enable truly stateful services, which is why EBS volume reattachment is not yet part of this launch.

The Amazon ECS integration with Amazon EBS is currently available in nine AWS regions: US East (Ohio), US East (N. Virginia), US West (Oregon), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), Europe (Frankfurt), Europe (Ireland), and Europe (Stockholm). Costs involved in using the integration depend on the usage of Amazon EBS (volumes and snapshots) – details of pricing are available on the Amazon EBS pricing page and Amazon EBS volumes in ECS in the AWS documentation.

About the Author



Presentation: How to Apply a Product Mindset to Your Platform Team Tomorrow

MMS Founder
MMS Jelmer Borst

Article originally posted on InfoQ. Visit InfoQ

Transcript

Borst: My name is Jelmer Borst, working for Picnic Technologies as a PM for our platform products. I would love to talk about how to apply a product mindset to your platform product tomorrow. Some of you might know Ben Hunt-Davis. He’s the coach of the UK rowing team. Going into the Olympics, they had a big challenge in figuring out how to really become fast. His main thought was that for every choice they had to make, the question should be: will this make the boat go faster? Very similarly, that relates to how platform products work. Namely, what can we do to make the boat go faster? What can we do for all the other product teams out there to accelerate them? Very often, you start like this. Especially coming from startups, in my case, six years ago, when Picnic was still a small company with only a handful of engineers, and now being at a size of 35 product teams, you start with building all these awesome new features. This goes super well. You’re scaling up. You’re interviewing your customers. You’re figuring out what to build, and you keep building more features. That goes well for quite some time.

As many teams know, very soon you start looking into things like technical debt. Technical debt is often thought of in terms of architecture. What is holding us back? We implemented this feature, but now we have more requirements coming up, and a different architecture would allow us to scale better. Or maybe we need to do some refactoring in order to actually have a better experience as a developer. Very similarly, that also starts applying to your CI system or your CD system. How you’re going to do a release is maybe not so important if you’re just a handful of engineers, let’s say a team of five. It starts becoming very important if you’re a couple of hundred or a couple of thousand engineers who want to release many times on a given day, especially as you’re scaling your total tech team. To solve this, you think, let’s start a platform team. Let’s have a team that focuses on solving a lot of these common problems. That isn’t easy. You may have super smart engineers working on this platform team who are trying to build the best possible stack for the others to build upon. But knowing how that stack is being used is not always easy, especially if you’re in multiple countries, or you have a remote team, or there are simply a lot of engineers across a lot of product teams: you might not actually know how your product is being used. Therefore, when you’re deciding whether to build feature A or feature B, or to improve your system in a certain way, maybe it’s gut based, or maybe it’s what some other developer told you.

I love this small comic, where they're going through this process: we need to make 500 holes in the wall, so I've built this automatic drill. It uses elegant precision gears to continually adjust its torque and speed as needed. You've designed this amazing system that works so well for what you think it requires. You might even think, we've potentially over-engineered this. Then the other guy says, great, it's the perfect weight, we'll load 500 of them into a cannon and just shoot them at the wall. They completely ignore your over-designed automatic drill. He just wants to demolish that wall, or basically get through it. We see this on a daily basis, where a lot of other engineers don't care about how you build, or test, or release, or how we could, in the ideal scenario, shave a couple of megabytes off a Docker image or something. They just want to ship features. That means we need to think about how we can enable them to do this best, and not by building over-engineered solutions that in the end don't actually help them. In this example, you could just have given them a couple of stones or bricks to solve the same problem, and have a ton of time available to actually do something else.

What Is Platform Engineering?

To maybe start off with, what actually is platform engineering? We just said, let's build this platform team. It seems to be coming up more, with different companies doing this. We have a central team that is now going to solve all our problems. What is it actually? As a definition, platform engineering is the discipline of designing and building toolchains and workflows that enable self-service capabilities for software engineering organizations. Let's start breaking that down. That means we're designing something, and we are building something: toolchains and workflows. We're primarily going to focus on the designing part. Figuring out, what should we build? How should we build this? How is this going to be used, so that you actually figure out the best solution? Then the second part is actually building those solutions. Most platform teams, or these central infrastructure teams, are actually super good at building. That's usually not really the problem; therefore, we would love to focus primarily on the former.

Then the second part is super interesting as well. We're talking about toolchains, so internal tools, often. Secondly, there are also external tools that you're using as an organization. Maybe from the get-go you're not thinking it is your job to actually manage or configure them appropriately, because they're only being used by two out of these many teams, so there's maybe not too much use in it. However, it can actually be quite crucial for the success or the productivity of all your engineers to also include external tools. Similar for workflows. Very often, we think about platform engineering being just about tooling, but actually, there's a huge focus also on workflows. How is one of my internal or external tools actually being adopted? Also, what is the process of the different product teams out there that are actually using my tools on a daily basis? Not only the tools that you provide, but also how do they run in general, their weekly, monthly, quarterly processes? In many cases, there's actually a lot you can do to accelerate teams by not doing something around tooling, but by focusing on their processes and helping them there.

Product Mindset for Platform Teams

I would love to give some ideas on a product mindset for platform teams for everyone out there. On one hand for leadership, who are figuring out, how can I actually get the most value out of these platform teams? Maybe you already have a platform team or an infrastructure team, but you feel you could do so much more, or you feel they're not going quickly enough or not having the impact that you would expect. Or maybe you're thinking about actually starting a platform team. It is also for engineering managers who want to figure out, how can I empower the engineers on my platform or infrastructure teams to better decide and improve their product, and improve their way of working? You see more platform teams who have a product manager either part-time or full-time on their team, and especially at companies where this is newer, maybe scaling up, you're starting on this team, and maybe internally you're transitioning into this role. Suddenly, that means you have quite a different product at your hands. Maybe you're quite new, either as a PM, or just new in that particular role, so how can you find your way in this overwhelming space? Then the last one, which is super good to call out, is engineers who are on a product team, whether you have a PM or not, who want to improve their impact and what to work on, both on a platform team, but also actually on any other team in your company. Very often the PM on your team is figuring out together with their stakeholders what to build, and maybe you feel that that is the [inaudible 00:11:37] you need to deliver on, whereas, actually, there's a lot of value in accelerating your own team as well. Thinking about what you can look at to understand where you could improve as a product team, as an engineer or maybe a technical lead of that team.

Very high level, I want to give you five different takeaways you can actually use. First of all: stop building. This seems very counterintuitive, and I don't mean, go on strike, stop doing literally anything you're doing. Stop and think for a moment, and start talking to your users instead. Second is aligning on the company's strategy. Understanding how your team fits into the direction of the company, such that you can have maximum impact as a team, within the scope of your company. Third is to explain your team's value to upper management and to other product teams, for them to understand what you're doing, and how they can also use and leverage you for their own successes. Fourth is measuring what matters and ignoring everything that does not. Figuring out what to measure, how to even conceptually think about this. There are so many different articles and books floating around, and measurements and metrics, and it's so easy to get stuck in the gazillion ways of measuring. How do you do this within the scope of your product? Lastly, and we should never forget this, iterating and celebrating all sorts of successes.

1. Do Not Build

To start off, I would like to call out: stop building. As an engineer or as a team, and even for PMs, this is sometimes actually quite hard to do. You have this awesome idea in your head. You're working one day and you hit a certain issue, or maybe you had lunch with a friend and you were discussing a tricky issue. As a result, you're very prone to start solving that. As engineers, we love to solve and build solutions. You're prone to figuring out, how can I open up a pull request to make this change? It only takes an hour or two, and we're helping that one person on this other team. Very often, because we're so prone to action and trying to solve that particular problem, we're not really solving the larger problem at hand. Or maybe we're only solving for one person; we're not solving for basically all the other product teams. So, first, start talking to people. You want to start talking to your CTO or your CPO to understand what they see as the main struggles. Please don't stop there. In some cases, we get this top-down: this is an issue, where should we start? We should solve this. Of course, that usually means we should absolutely look into it, but again, don't start building straightaway. Secondly, start talking to your users. Your engineers, technical leads, engineering managers, architects, other product managers: what is holding them back, what is causing them issues? Have these interviews. Do this structurally. Don't just have a chat and put some stuff in your brain; really focus on building up structured interviews where you're going out to your users. Segment them, from junior engineers to senior engineers, to see what types of problems they are having. People who are very new at your company versus people who have been at your company for a very long time. They all have different problems that they would love for you to solve.

Many teams are still in this ticketing mode, where another team is hitting an issue and trying to solve something. That just comes in throughout the quarter, while they're trying to deliver on their own roadmap. Now they want you to support whatever they are focusing on. They create this ticket, and in some cases, it is an actual ticket for maybe an infra team to pick up and solve. In other cases, it's not really a Jira ticket, but actually a feature request that comes through and suddenly carries very high pressure to solve. We all know these examples: we need you to solve this and support this in the next few days, because this is blocking our project, and we have so much pressure to deliver, and we really need this. While you would obviously love to help them, maybe you're cutting corners to support a feature that would actually be very unrealistic to deliver in a couple of days. How can we start thinking about all of this more proactively? Apart from making your own work a lot more structured, it also means that it's not only about the most vocal engineers. If others are hitting roadblocks, they're reaching out to you. They are the most vocal engineers, unafraid to step up. They are also often the ones who are a little bit more senior, or, let's say, who are just very loud in sharing their ideas of how something should be done, or what is required. I'm sure this is very recognizable: a couple of people that you come across very often. Whereas, actually, you don't want to solve for the most vocal engineers, or vocal people, or maybe your CEO; you want to solve for everyone, so that you can have the biggest impact.

It's very cliche, but start designing your personas: who are your users? Say you're on a team who's maintaining and improving the company CI system, and you have these different audiences within your company. On one hand, we're supporting Java engineers, QAs, and mobile engineers. In this case, we have only, let's say, a backend in Java; we have a mobile app that mobile engineers are working on, be it iOS, Android, or maybe something like React Native; and we have QAs to ensure the quality of what we're putting live. Your CI system is being used by effectively all three, either directly or indirectly. Next to defining who you are effectively building for, it can also be helpful to define who you are effectively not building for. Especially if there are some users who are somewhat involved, who might use it from time to time, make it explicit that you're not building for them. It can actually be very fruitful to also articulate this to them: I know you're also using our system from time to time, but you're not our primary audience, and that is why we are solving these needs. Then you can have the discussion about whether they agree and whether that makes any sense.

As a second step, don't use imaginary personas. Start tying them to actual people. These are your best advocates that you want to look out for. If you think about the Java engineer in your company, who is that? Who is somebody who is already used to thinking about how we can improve? That person is very helpful, maybe somebody you've collaborated with, maybe you've worked on some pull requests with them before. Here we have, for example, Steve on the payments team, who you could work with a lot on improving CI for Java. Then we have Abby on the promotions team, and Liza on the webstore team. Making these actual people allows you to have that discussion with them on certain features; it gives you a person you can first interview, but also a sparring partner as you go. People have opinions, and that can actually be quite scary. As Kathy Korevec said, you're a chef cooking for other chefs. You are an engineer building for other engineers. That can be rather scary, because they will have opinions on what you've built. They could theoretically have built this themselves, so very often they will voice their concerns, or voice their opinion that maybe it is not good enough. Really try to embrace this. Really try to focus on getting their input as much as possible. If it's not good enough, and they are very vocal about what you're putting out, that is actually super good.

2. Align on Your Company’s Strategy

Secondly: aligning on your company's strategy. Understanding what the success of the company depends upon. Do you maybe have large productivity issues? That means engineering enablement becomes key for your company to succeed. Maybe there are hiring challenges. Maybe you can't offer the same compensation as the FAANG companies out there. Or you might have scaling issues as you go: there's exponential forecasted growth, so how can you properly support this from the systems side? Or maybe you have reliability issues; perhaps you are building a SaaS solution, where towards your customers, you want to make sure reliability is key. That might be the theme at the moment. This seems very easy, in the sense of, obviously, we have these issues, so therefore we should work on them. But try to think: for your company to succeed in a year's time, or a couple of years' time, what are really the elements that you as a platform team, or engineering enablement team, or infrastructure team, can do to enhance and empower that? Relating that to your team and explaining it can help, on one hand, with getting the right resources in place, but it also means you start having a bigger impact. Also, for yourself, that recognition, or that idea of success, can actually help a lot with how you enjoy your work, and how much you can do.

For Picnic: we are an online groceries company, and our model is that you can order today and we deliver it tomorrow. This is not your quick e-commerce where you order now and get it in 15 to 30 minutes. This is really for your weekly groceries. We're super customer centered, to reduce the time that you're waiting in line. Really getting the boring elements out of somebody's life, because nobody enjoys sitting in traffic after work to stop by the grocery shop. Nobody loves spending their time on it, because you want to be with family, with kids, doing other joyful things or hobbies, and groceries, for most people, is not one of those. We have this huge sustainability potential by drastically reducing waste, as we can optimize for what customers are actually buying instead of trying to predict what they might want to buy. In order to do this, which is actually not trivial, we need to build a lot of internal systems to make this work. Namely, as a supermarket, our margins are razor thin, and in order for this whole model to work, we need to basically build every essential piece of tech ourselves. We have our own customer app, the most visible part, so users can download our app, log in, and order their groceries. However, the entire supply chain is also managed by us. We have our own ERP, our own users, deliveries, orders; we have our own warehouse management system, our own route optimization software, our delivery app, our forecasting. We have stellar customer service, where we built a lot of it ourselves as well, to actually make it a lot more customer friendly compared to most traditional support out there. The cherry on the cake: we launched an automated warehouse about a year ago, with all software built internally.

What does this really mean? This is a nice story about my company, about Picnic, but how does this relate to thinking about strategy? First of all, this means we have many internal products with high complexity and a large domain. This leads to understanding what the elements are that you should be doing as a platform team to be able to empower this. On one hand, you have engineers who are very deeply focused on their own product, which means that making it easier for them to get into the nitty-gritty to solve their problems is key. It also means we have a ton of different systems out there, where you're working with other services that you're not so familiar with. That also requires quite a bit of domain knowledge: how are you navigating that space? What if you have an incident, or what if you want to integrate with this other system? Understanding how your landscape works from a tech point of view is actually quite key for your product, to understand where you can have the most impact.

Somebody made this analogy of customer facing product managers versus platform product managers. On one hand, the customer facing PM has this focus on one goal. You have this key metric that you're trying to move, or this key need of a user that you're really trying to optimize for. As a platform PM, sure, you're building for your users, but actually you're building the right tools and workflows to achieve the customer facing PM's goal, because it's not about your goal. If your platform team is doing amazing, but the company is still going down, then you're not doing a good job. In the end, you're trying to solve for the whole company. It means this becomes more of a SimCity exercise, where you are trying to figure out what they want to achieve, such that you can support them the best that you can. To me, it's similar to the NorthStar metric that the customer facing PM is building towards, working as best as they can to reach their destination. The analogy is: they're hiking in the mountains, trying to hike as fast as they can in order to reach their destination. As a platform, you have a different view, where you're standing behind them and see them walking there. You need to think, what can I do to help them reach their destination faster? Now you're thinking, maybe I need to build or invent a compass, so that they will reach their destination faster, or maybe some form of map that they can actually start using. Obviously, this sounds very straightforward and easy: having this separate view to understand what they are doing and where they want to go, so what can I do to help them?

3. Define and Explain Your Team’s Value

Thirdly: define and explain your team's value. We were talking earlier about talking with your users, and internal partners are effectively a little bit more than just your users. You have a customer base of people who are using your services; as a platform team, you're really trying to look for these internal partners that you can start working with. It changes from this question-and-answer type of mode, or doing an interview, to real collaboration. Remember those people that you have defined as your personas? These are the perfect people you can use as your internal partners. That is, on one hand, for them to collaborate with you to solve something. At the same time, this helps a lot for them to understand what value you're adding to the company: when should they reach out, when could they actually use you? Instead of trying to solve something themselves, they can now start reaching out to you to solve their need.

Secondly, try to educate your leadership, being the voice of your users towards leadership as well. Why is it complex for them to build their solutions? What makes software delivery hard? What are the challenges others are facing day-to-day? What are the bottlenecks that they have, that you can solve and help them out with? What you see very often is that many people, or many engineers in your company, are having these issues that somehow do not bubble up, or do not get nicely prioritized, or are not well voiced or articulated, because it's so distributed: you have different pockets with different people having different problems. At the same time, they might not always voice this in a public channel or otherwise. Also, it is very often this death by a thousand cuts: all these small little issues that are, on their own, not too much of anything, but together really a cry for help. As a whole, all these small issues actually lead to quite some impact on you as an organization.

Then, also, manage expectations. Especially if you're starting out a platform team, you will not have solved all the problems by tomorrow, even though you might think, now we have a couple of people focused on it, so surely we should be 10 times faster tomorrow, or in a week's or a month's time. It's quite a long-term investment. It has an amazingly high return on investment, but you are doing something for the long term. The fact that you're building something now will help with building that new feature tomorrow, next week, or next month. However, even though it is a longer-term investment, don't use that as an excuse to go slow, which is the tricky part. On one hand, don't oversell that you will have solved all the problems by tomorrow, but at the same time, for yourself, still set the bar high to improve.

4. Measure What Matters

One of the key elements of being a product team is to measure what you're doing, and for that reason to iterate, learn, and understand whether you're adding value, and to articulate the issues that you have. What you see in many companies is that product teams either start measuring literally everything that could potentially be measured, or measure very little. You really want to focus on a couple of key metrics, and start ignoring everything else. It's so easy for people to come around and ask you for this one single feature, or one single request, or the small thing that they might need. Like, yes, we can probably support it, even though it might actually not help you get towards your goal. Stop doing favors. If it doesn't make the boat go faster, just don't do it.

When people are talking about platform teams, very often someone comes out and says, let's just use the DORA metrics. I know DORA is already a little bit outdated in this regard, but you still see this so often. For those who have maybe lived under a rock and have never seen this, it has four key metrics; over time, they also added a fifth and a sixth metric. Primarily, it's about deployment frequency. It's lead time for changes: how quickly is your commit in prod? The time to restore your service, often also measured as mean time to restore: if you have an incident, how quickly do you actually resolve the issue? And your change failure rate: how often do you actually have issues in production? These are four metrics which are related to a company's success from an engineering point of view. Many would argue, let's just start there, throw them up, and start optimizing. The question is, really, what are you optimizing for?

Don't start with the metrics. It's so easy to think, we should track uptime, or we should start increasing our deployment frequency, because that's what DORA says. Or maybe you've read an article from another company who are doing this, and doing it well. Maybe you should, but it really depends on what you're trying to solve as a company. If deployments are really holding you back, you might actually want to solve for deployment frequency, but maybe that's not your issue at the moment, and you should be solving for reliability, or something else. First, really start with the strategy of your company, but also the strategy of your team. What problem are you really trying to solve here? Then, secondly, what do we aim to achieve with it?

There are four areas that you can think of that you want to measure for. First, measure to plan. That is really your product impact, your mid to long-term metrics; this could be issue lead time, or maybe the number of projects that are on time. Really, not something that you will change or influence directly tomorrow, but something over a longer period of time, let's say a six-month timeframe. Secondly, measure for the board: your CEO or the finance team, or maybe your CTO or CPO, who are interested in the impact that you're making. That can be anything from infrastructure costs to, depending on the company, the metrics that give them an understanding of the value you are delivering directly to the company. Obviously, this relates to the part we already talked about: aligning with them. Talking about cost is a very easy one to start with, because your finance team maybe wants to reduce cost. As you go, you might want to include some other metrics as you align with them.

The third is measure to optimize. This is really for your team internally, where you want to verify the experiments, the features that you're building and launching, where you want to see some meaningful improvements. This could be incident rate. This could be maybe the number of canary releases that you're doing. That really depends on the challenges that you have and that you're solving for. Then, the last category is measure to operate. This is in the one-to-two-week cycle, where you want to think about this as observability for your product. Maybe it's uptime or error rate, maybe it's other things. Really something that is fluctuating, where you would really see the solutions that you're building, or maybe problems that come up that you need to respond to. On one hand, these can be metrics for your own team, but very often these are measures for the whole organization. This is why, on one hand, this is not easy, but also actually quite crucial to get right.

Having these four categories in mind, how can we actually define the metrics that we need to go for? A very helpful way of thinking about this is to start with the outcome, so really the NorthStar metric of your company. Secondly, output metrics, which are lagging metrics, directly related to that NorthStar, but still lagging quite a bit. These are not metrics you will directly influence tomorrow. However, they are driven by input metrics, leading metrics that you can actually influence with the projects that you're doing. I put here an example for an e-commerce company, like Picnic is. You have your number of deliveries, which is our NorthStar, which we are trying to optimize as Picnic is growing, and which is primarily driven by first order conversion and retention. These are very lagging metrics. There are so many things happening that result in this first order conversion, or this retention of our customers. Retention might actually be driven very much by repurchases, maybe recipes, and some others.

If we're looking at the leading space, our input metrics are potentially article ranking, or our search, which is actually driving the repurchases that customers are doing. These would be our leading metrics. These are the ones we can actually directly influence. Very similarly, we can also do this for platform teams. We are still not suddenly solving for a different outcome; it's still the same outcome that we were solving for, because we are looking for the success of the company. We're still focusing on number of deliveries, but our output metrics start changing. Our lagging metrics could be maybe quality of service, and maybe your iteration speed, which is very high level. The faster we can iterate, the faster we can grow: the faster the other product teams can put down improvements that will increase the number of deliveries. That is mainly driven by how quickly we deploy, your issue lead time, and all kinds of different factors that contribute to it. Looking at those, which ones are important? Perhaps deployment frequency is one that is actually driving your iteration speed and is rather low, so instead of just pushing to improve it, start understanding the underlying drivers. It may be driven by batch size, or maybe by the number of canary releases, which is only done in a very small portion of your teams. These are metrics you can actually influence directly with the different projects you're doing, either on the adoption side or on the tooling side.

Now that you've figured out a way to define the metrics that you should be using, the question is, how should you actually use them? Build a habit of reviewing metrics periodically: to plan, every quarter; for the board, roughly every quarter; to optimize, every two to four weeks; to operate, every week. You have a different cadence for the different types of metrics out there. Build hypotheses about why these metrics change. Verify your hypotheses with your internal partners to understand: I see this issue happening, does it actually make any sense? I think it's due to this. What do you think? What are you seeing on your side? Also, because very often these metrics are quite aggregated, start slicing and dicing; make segments of different users, of different tenure throughout your company, to understand what's happening. Lastly, align these metrics with reality. If in reality there are issues, but you don't see this in your metrics, something is wrong. If your metrics are showing issues, but in reality everything seems fine, it's very likely something is also wrong.

5. Iterate and Celebrate Successes

Now's the time to really put everything together. You are measuring the key metrics for your product and iterating on them. You are learning what is happening. You are talking to your users. You start understanding fluctuations, what is driving your overall goals. Based on that, you can start building improvements. That is not necessarily easy. This iteration cycle is required to make sure what you're building actually matches your users' expectations. Don't throw your product over the fence: now I've built this improvement in the CI system, please adopt it. Tools are amazing, but without adoption they're nothing. We too often treat developer tools like spaghetti: we throw them at the wall and see what sticks. In our track, there's also a great talk by Olga, who focuses on adoption. You might want to check that out if you're more interested in this area.

Lastly, share success stories. Have regular updates, for example, on Slack or your internal communication system. Maintain a changelog of what you're building, for people to go back and understand what you've done. Join all-hands to share the wins, but also encourage the users of your product to share their stories. Lastly, be transparent about your challenges as well. Nothing is going to be easy. Don't try to show that everything is going super well if it's not. Be transparent about the challenges and what you're trying to solve, as a team but also as a company. That helps others understand where you're going and where your focus as a platform team lies, making sure that that is also aligned. Those were my recommendations for applying a product mindset to your platform team. Don't start building straightaway. Align on your company's strategy. Define and explain your team's value. Measure what matters. Iterate and celebrate your successes.


Subscribe for MMS Newsletter

By signing up, you will receive updates about our latest information.



.NET MAUI Community Toolkit 7.0.0 Aligns to .NET 8

MMS Founder
MMS Edin Kapic

Article originally posted on InfoQ. Visit InfoQ

On November 15th, 2023, Microsoft announced version 7.0.0 of their open-source MAUI Community Toolkit. The new version adds support for .NET 8 and brings several bug fixes.

.NET MAUI Community Toolkit (NMCT) is one of Microsoft’s .NET community toolkits hosted on GitHub, this one aimed at MAUI developers. Their purpose is to let the community contribute useful code that is missing from the official frameworks. The community toolkits are released as open-source software, and developers are encouraged to submit their contributions. Some toolkit additions can later be promoted into the official Microsoft libraries.

MAUI is an acronym that stands for .NET Multi-platform App UI. According to Microsoft, it is an evolution of the Xamarin and Xamarin.Forms frameworks, unifying separate target libraries and projects into a single project for multiple devices. Currently, MAUI supports writing applications that run on Android 5+, iOS 11+, macOS 10.15+, Samsung Tizen, Windows 10 version 1809+, and Windows 11.

An updated version of NMCT was released the day after the official .NET 8 launch, just as happened with the launch of .NET 7 in 2022. Version 7.0.0 of the toolkit brings support for .NET 8, as expected. It was followed by an update, version 7.0.1, in December 2023.

The new toolkit version brings small enhancements rather than big features. For example, it adds support for the platform-native UNUserNotificationCenter when running on iOS 17 or higher. On Android 29 or lower, there was an issue with determining the height of the status bar for correct text wrapping; this issue is now fixed. Following Microsoft’s recommended best practices for asynchronous code in .NET, all toolkit methods that return a Task or a ValueTask now also accept a CancellationToken parameter to propagate task cancellation.

Several dependency versions were updated. One direct dependency for Samsung Tizen (the SkiaSharp 2D graphics library) was removed, because it was a stop-gap fix for a security vulnerability in an older, indirectly referenced version of the library. As Tizen’s official UI extension package, upon which NMCT depends, has been updated to use a safe version of SkiaSharp, the direct dependency is no longer needed in the toolkit.

The .NET MAUI Community Toolkit went through several major releases in 2023, with versions growing from 3.1.0 in January to 7.0.0 in November. The reason for this multiplicity of version numbers is the presence of breaking changes, which force a major version number increase.

The updated documentation for the .NET MAUI Community Toolkit is available on the Microsoft Learn website. The project repository is hosted on GitHub and currently has 90 open issues.




MongoDB, Inc. (MDB) Presents at 26th Annual Needham Growth Virtual Conference (Transcript)

MMS Founder
MMS RSS

Posted on mongodb google news. Visit mongodb google news

MongoDB, Inc. (NASDAQ:MDB) 26th Annual Needham Growth Virtual Conference January 16, 2024 10:15 AM ET

Company Participants

Michael Gordon – Chief Operating Officer and Chief Financial Officer

Serge Tanjga – Senior Vice President of Finance

Conference Call Participants

Mike Cikos – Needham & Company

Mike Cikos

Great. Thank you to everyone for joining us today. My name is Mike Cikos. I am the Lead Analyst here at Needham covering infrastructure software. With me, I’m pleased to say, we have the management team from MongoDB: COO and CFO Michael Gordon, as well as the SVP of Finance, Serge Tanjga. Thank you to both of you for joining us today. We really do appreciate it as part of the Needham Growth Conference.

Michael Gordon

Thanks for having us, Mike.

Serge Tanjga

Thank you, Mike.

Mike Cikos

And I know we’re going to be tight on time here, so we’re just going to just tackle it right up front. But, one of the things I received a decent amount of inbounds on, obviously, has been the December security incident. Just to kick it off, can you kind of put any parameters out there as far as how extensive the unauthorized access was with respect to the customer base, as well as just set the table? I’m sure not everyone was following the blogs like I was, but anything else you could put around, that would be great.

Serge Tanjga

Sure. Yes. Thanks again for having us. Happy to dive right into it, and let’s start with that. So, as we shared, we were the subject of a phishing attack that gained access to certain of our corporate applications. The unauthorized person got access primarily to customer contact info and other account-related information. We found no evidence of unauthorized access to Atlas clusters or the Atlas authentication system. Those are two different systems. So that’s sort

Article originally posted on mongodb google news. Visit mongodb google news



MongoDB ticks higher amid takeover speculation (NASDAQ:MDB) | Seeking Alpha

MMS Founder
MMS RSS

Posted on mongodb google news. Visit mongodb google news

@Stock Scanner You may want to check the FCF. It’s ballooned to almost 100M and rising fast as the company starts to achieve operating leverage. The losses of the past were strategic investments to balance growth and investment for the future because the opportunity was so large. Very smart management. This company has a gross profit margin of 75%, rising every quarter since the end of 2021. Furthermore, actual EPS is positive. They earned 81 cents in 2023 and 2024 estimates are $2.91. Important to note MDB has consistently beaten estimates, so I wouldn’t be surprised to see $3.00 to $3.55 in EPS for 2024. That doesn’t make the stock cheap today unless you consider the long runway of double digit growth and compounding. Also AI may provide accelerated revenue growth. These fast growers with very large TAMs are difficult to value in their hyper growth phase. Is today’s price too high? Maybe and maybe not. But if you are patient, and MDB continues to execute as well as it has been, and you let the magic of compounding do the heavy lifting, this likely is a market beater over the next 5 and 10 years.

Article originally posted on mongodb google news. Visit mongodb google news



The No-Nonsense Guide to Bypassing API Auth Using NoSQL Injection – Security Boulevard

MMS Founder
MMS RSS

Posted on nosqlgooglealerts. Visit nosqlgooglealerts

Introduction

Sometimes, the way to bypass API auth is easier than you think. That’s all thanks to modern software development and the exponential growth of web services and cloud-based applications.

Let me explain.

APIs (Application Programming Interfaces) serve as the backbone for the seamless interaction between different software applications, systems, and services. They enable the integration of functionalities and data exchange, playing a pivotal role in digital ecosystems, from web applications to mobile apps and cloud services.

The rise of NoSQL databases has been a significant development in this landscape.

Unlike traditional SQL databases that use a structured query language for defining and manipulating data, NoSQL databases are designed for specific data models and have flexible schemas for storing and retrieving data.

This flexibility makes them well-suited for handling large volumes of unstructured data, which is increasingly common in big data, real-time web applications, and the APIs driving these applications.

Popular NoSQL databases like MongoDB, CosmosDB, Cassandra, and Couchbase have become the backbone for many applications due to their scalability, performance, and ease of use.

However, this shift also brings new security challenges.

NoSQL databases handle queries and data differently, requiring a fresh approach to securing databases and the APIs that interact with them. The traditional security measures and tools designed for SQL databases are often not directly applicable to NoSQL environments, leading to potential vulnerabilities like NoSQL injection attacks.

In this article, we will explore how you can check to see if it’s possible to bypass API auth by injecting potentially malicious data into the login payload.

Here we go…

Understanding NoSQL Injection

NoSQL injection is a type of web application security vulnerability that allows an attacker to manipulate the queries that are passed to a NoSQL database.

Unlike traditional SQL injection, which manipulates an SQL query string using harmful input data, NoSQL injection targets the structure of the database query itself. This is mainly because NoSQL databases are queried through API calls or object literals instead of standard SQL syntax.

Given their flexible schema, NoSQL databases are particularly vulnerable to these kinds of attacks, emphasizing the need for stringent data validation and effective security measures.

Where can NoSQL injection be abused?

Common scenarios where NoSQL injection can be exploited in APIs often involve areas where user-supplied input is improperly sanitized and is directly used in database queries.

For instance, authentication mechanisms can be prime targets if they rely on user-provided credentials to validate access. An attacker could exploit this by injecting a NoSQL query that manipulates the authentication logic, allowing them to bypass API authentication.

That’s what we will be looking at today.

Other vulnerable points include APIs that perform search operations, update user profiles, or any functionality that involves direct user input. All these scenarios underline the criticality of validating and sanitizing user inputs to ensure they do not contain malicious NoSQL queries.

It’s a regular theme around here. Taint all the things… because developers keep forgetting to sanitize their inputs.

Identifying Vulnerable APIs

Identifying APIs that use NoSQL databases often requires close scrutiny of the API’s behavior and responses.

One technique involves testing the API endpoint with different input types, including arrays and objects. NoSQL databases may respond uniquely to such inputs, providing a clue to their underlying technology.

Moreover, certain NoSQL databases like MongoDB use distinctive operators such as $ne or $regex, which can be used in payloads to identify their usage.

Specific error messages or unusual responses can also reveal the use of NoSQL databases. It’s not uncommon to see an error message that might leak data schema details or other sensitive information.

However, it is essential to approach this process cautiously, as probing live APIs can inadvertently disrupt their functionality or lead to unintentional security breaches.

Tools that can help

Several tools can aid in testing for NoSQL injection vulnerabilities:

  1. NoSQLMap: Designed as a pentesting tool, NoSQLMap helps identify and exploit NoSQL database vulnerabilities. It supports a variety of NoSQL databases and provides automated features.
  2. NoSQL Exploitation Framework: This framework, written in Python, enables penetration testers to exploit configuration and implementation flaws within NoSQL databases.
  3. NoSQL Attack Suite: This is a collection of NoSQL exploitation scripts made to automate attacks against NoSQL databases; one script can bypass logins, and another can dump the NoSQL database.
  4. Nmap: Although primarily a network scanning tool, Nmap can detect NoSQL databases by using scripts such as ‘mongodb-databases’ and ‘mongodb-info’ in its scripting engine.
  5. Burp Suite: Burp Suite includes a scanner feature that can identify NoSQL injection vulnerabilities. There are also extensions like the NoSQLi Scanner that can do a more thorough evaluation.

The reality though is that tools can only do so much. When looking to bypass API auth, you can follow a few manual processes to quickly test for NoSQL injection.

Bypassing Authentication with NoSQL Injection

When looking at bypassing authentication in an API, you should start by finding the endpoint responsible for accepting and validating credentials during a user’s login attempt.

This will typically include a username and password as part of the credentials but may include additional details. Our testing will only focus on the username and password fields.

Crafting NoSQL injection payloads involves the creation of malicious data aimed at exploiting vulnerabilities in a NoSQL database. To bypass API authentication, you need to construct a payload that tricks the system into granting unauthorized access.

Such a payload might target the authentication mechanism by injecting specific operators or logic that manipulate how the system interprets queries. For instance, in MongoDB, you might utilize the ‘$ne‘ operator, equivalent to ‘!=‘, to alter the logic of the authentication query. Instead of the system checking for an exact match of username and password, the manipulated query might check for a username that is not equal to an arbitrary value, effectively sidestepping the need for a correct password.

Remember, the primary aim is to negate or bypass the standard security checks, but the specific payload will largely depend on the type of NoSQL database and its specific weaknesses.

Basic API auth bypass technique

I always recommend that you start by capturing a proper login attempt in Burp. Look at the body and determine the field names that represent the username and password. Send that request to the Repeater tab.

Let’s assume for a moment that those fields are named user and pass, respectively.

A basic bypass for a Content-Type of application/x-www-form-urlencoded might look like this:
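With the assumed field names user and pass, such a tainted form body might be:

```
user[$ne]=fu&pass[$ne]=bar
```

The bracket syntax matters here: extended query-string parsers (for example, the qs library used by many Node frameworks) deserialize it into nested objects, so the server receives operator objects instead of two plain strings.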

If the login endpoint is accepting JSON, it might look something like this:
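Keeping the same assumed field names, the JSON equivalent might be:

```json
{ "user": { "$ne": "fu" }, "pass": { "$ne": "bar" } }
```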

Why does this work?

Well, think about how the NoSQL query is constructed.

Consider some vulnerable nodeJS API code that takes the credentials directly:
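The snippet below is a minimal, self-contained sketch of that vulnerable pattern; the user store, field names, and the tiny stand-in for MongoDB’s filter matching are illustrative assumptions, not the article’s actual code.

```javascript
// A hypothetical user store with a single privileged account.
const users = [{ user: 'admin', pass: 's3cret', role: 'superuser' }];

// Stand-in for MongoDB filter matching: a plain value requires an exact
// match, while an object like { $ne: v } matches any value not equal to v.
function matches(doc, filter) {
  return Object.entries(filter).every(([field, cond]) =>
    cond !== null && typeof cond === 'object' && '$ne' in cond
      ? doc[field] !== cond.$ne
      : doc[field] === cond
  );
}

// The vulnerable lookup: request-body fields flow straight into the filter,
// so where a string is expected, an operator object is accepted just as happily.
function login(body) {
  return users.find((doc) => matches(doc, { user: body.user, pass: body.pass })) ?? null;
}

console.log(login({ user: 'admin', pass: 's3cret' })?.role);             // superuser
console.log(login({ user: { $ne: 'fu' }, pass: { $ne: 'bar' } })?.role); // superuser (!)
```

Because the body values are used as-is, the tainted lookup matches the first user in the collection without a valid password.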

But our malicious input modifies that behavior. It turns the expected string input into a query operator, ultimately changing the lookup to look something like:
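With the tainted credentials, the filter the driver receives effectively becomes:

```javascript
{ user: { $ne: 'fu' }, pass: { $ne: 'bar' } }
```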

As it now reads, the query is telling NoSQL, “Find me a username that doesn’t equal ‘fu’ and whose password is not ‘bar’.” This is the essence of how to bypass API auth.

This will probably return the first user in the collection… which is usually the super user, or at the very least, the first user who probably has admin privileges.

Of course, if you know the user account you want to log in as, you can modify the query accordingly. For a user named Bob, it might look like this:
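Assuming the endpoint accepts JSON and the same field names, such a payload might be:

```json
{ "user": "Bob", "pass": { "$ne": null } }
```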

See what I did there? Not only did I explicitly define the username I wanted, but I also told NoSQL I wanted to log into Bob’s account as long as his password was not null… which is a pretty good bet.

Conclusion

A comprehensive understanding of NoSQL injection is pivotal for API security in this digital era. The ability to bypass API auth using NoSQL injection demonstrates the vulnerability that exists when inputs directly influence query construction.

Just because it’s using NoSQL doesn’t mean there is no injection. It just works differently than SQL injection.

Manipulating the login payload to change the expected string into a query operator exposes a risky loophole. It leads to an unintended behavior, potentially returning access to accounts that may have higher levels of privilege than expected, thereby compromising system integrity.

So give it a try the next time you are conducting an API pentest. Check to see if NoSQL may be in use, and then try to taint the login payload with an operator to modify the authentication behavior to act differently.

You might be surprised by what you find. 😈

One last thing…

API Hacker Inner Circle

Have you joined The API Hacker Inner Circle yet? It’s my FREE weekly newsletter where I share articles like this, along with pro tips, industry insights, and community news that I don’t tend to share publicly. If you haven’t, subscribe at https://apihacker.blog.

The post The No-Nonsense Guide to Bypassing API Auth Using NoSQL Injection appeared first on Dana Epp’s Blog.

*** This is a Security Bloggers Network syndicated blog from Dana Epp's Blog authored by Dana Epp. Read the original post at: https://danaepp.com/bypassing-api-auth-using-nosql-injection
