Visual Studio 2022 v17.6 Preview 2: Productivity, Game Development and Enterprise Management

MMS Founder
MMS Almir Vuk

Article originally posted on InfoQ. Visit InfoQ

Last month Microsoft released the second preview of Visual Studio 2022 version 17.6, now available for download. The release is packed with new productivity features and improvements aimed at enhancing game development, mobile development, and enterprise management.

Preview 2 brings several new features aimed at increasing developer productivity. The development team has responded to customer feedback with Git Stage and Commit During Build, allowing users to stage changes and commit them while a build runs. The Merge Dialog has also been updated, giving users better insight into the impact of merge operations and warning them of potential conflicts. The introduction of breakpoint groups lets users better organize their debugging process, while Instrumentation Profiling for C++ brings instrumentation-based performance analysis to that language. Additionally, the Create Member Function feature offers quick ways of adding constructors and equality operators to C++ code via the three-dot and screwdriver icons in the code editor.

Visual Studio 17.6 Preview 2 is aimed at simplifying game development for both indie and AAA game creators. Among the improvements is the integration of Unreal Engine Code Analysis, which surfaces warnings and errors from the Unreal Header Tool directly in Visual Studio: issues emitted while parsing Unreal-related C++ headers are displayed in the Error List and marked with purple squiggles in the editor.

In addition, the popular HLSL Tools extension by Tim Jones is now available as part of Visual Studio, offering users improved productivity with syntax highlighting, statement completion, and go-to-definition. To use HLSL Tools, users must enable the component in the Game development with C++ or Game development with Unity workload in the Visual Studio Installer.

Regarding .NET mobile development, the new Android Manifest Editor lets developers easily set available properties and request device-specific permissions by double-clicking the AndroidManifest.xml file in the Solution Explorer. This feature is expected to simplify Android app development, saving developers time and effort.

Regarding enterprise management, the latest update introduces two new features. The first allows companies to host and deploy Visual Studio layouts from an intranet website, in addition to file shares. This option can simplify layout maintenance and improve installation performance for organizations that span multiple global network file shares. The capability is currently targeted at IT administrators deploying remotely, and guidance on enabling the experience is available on the feedback site.

The other feature addresses the need to limit exposure to available products in the Installer. The Installer's Available tab now gives easy access to current previews, while allowing organizations to restrict exposure to certain products by disabling channels, or to disable the Available tab entirely using the new HideAvailableTab policy.

In the closing paragraphs of the original release blog post, Microsoft and the development team encourage users to provide feedback and share suggestions for new features and improvements, emphasizing their commitment to continuously enhancing the Visual Studio experience.

Lastly, developers interested in this and other Visual Studio releases can consult the detailed release notes covering updates, changes, and new features across the Visual Studio 2022 IDE.

About the Author

Subscribe for MMS Newsletter

By signing up, you will receive updates about our latest information.



NoSQL Databases Software Market – A Comprehensive Study by Key Players:MongoDB …

MMS Founder
MMS RSS

Posted on nosqlgooglealerts. Visit nosqlgooglealerts

A global market study examines the performance of the NoSQL Databases Software market in 2023. It offers an in-depth analysis of the market's state and the competitive landscape worldwide, covering growth drivers, recent developments, business strategies, regional performance, and future market status. The report also addresses the industry's latest opportunities and challenges along with historical and future trends, focusing on market dynamics that are constantly changing with technological advancement and socio-economic conditions.

Pivotal players studied in the NoSQL Databases Software report:

MongoDB, Amazon, ArangoDB, Azure Cosmos DB, Couchbase, MarkLogic, RethinkDB, CouchDB, SQL-RD, OrientDB, RavenDB, Redis

Get a free copy of the 2023 NoSQL Databases Software report: https://www.mraccuracyreports.com/report-sample/204170

The study analyses the crucial factors of the NoSQL Databases Software market based on present industry conditions, market demand, business strategies adopted by market players, and their growth scenarios. The report segments the market by key players, type, application, and region. It first offers an overview of each company's profile, core products and specifications, revenue, production cost, and contact information, and then provides a forecast and analysis at global and regional levels.

COVID-19 Impact Analysis:

This report depicts the pre- and post-COVID impact on market growth and development, based on financial and industrial analysis, for a better understanding of the NoSQL Databases Software market. The COVID-19 pandemic affected a number of vendors in this space; the dominant players in the global market, however, are adopting new strategies and looking for new funding sources to overcome rising obstacles to market growth.

Access full Report Description, TOC, Table of Figure, Chart, etc. https://www.mraccuracyreports.com/reportdetails/reportview/204170

Product types covered in the NoSQL Databases Software report are:

Key applications of this report are:

Large Enterprises, SMEs

Report Name: NoSQL Databases Software Market Size Report
Market Size in 2020: USD xx Billion
Market Forecast in 2028: USD xx Billion
Compound Annual Growth Rate: CAGR of xx%
Number of Pages: 188
Forecast Units: Value (USD Billion), and Volume (Units)
Key Companies Covered: MongoDB, Amazon, ArangoDB, Azure Cosmos DB, Couchbase, MarkLogic, RethinkDB, CouchDB, SQL-RD, OrientDB, RavenDB, Redis
Segments Covered: By Type, By End-User, and By Region
Regions Covered: North America, Europe, Asia Pacific (APAC), Latin America, Middle East and Africa (MEA)
Countries Covered:
  North America: U.S. and Canada
  Europe: Germany, Italy, Russia, U.K., Spain, France, Rest of Europe
  APAC: China, Australia, Japan, India, South Korea, South East Asia, Rest of Asia Pacific
  Latin America: Brazil, Argentina, Chile
  Middle East and Africa: South Africa, GCC, Rest of MEA
Base Year: 2021
Historical Year: 2016 to 2020
Forecast Year: 2022 – 2030
Customization Scope: Avail customized purchase options to meet your exact research needs. https://www.mraccuracyreports.com/report-sample/204170

Geographic regions covered by the NoSQL Databases Software report include:

North America (United States, Canada, and Mexico),
Europe (Germany, France, UK, Russia, and Italy),
Asia-Pacific (China, Japan, Korea, India, and Southeast Asia),
South America (Brazil, Argentina, Colombia, etc.),
Middle East and Africa (Saudi Arabia, UAE, Egypt, Nigeria, and South Africa)

The NoSQL Databases Software report provides the past, present, and future industry size, trends, and forecast information related to expected sales revenue, growth, and the demand and supply scenario. Furthermore, it assesses the opportunities and threats to the market's development over the forecast period from 2023 to 2029.

Please click here today to buy full report @ https://www.mraccuracyreports.com/checkout/204170

Further, the report gives information on company profiles, market share, and contact details, along with a value chain analysis of the NoSQL Databases Software industry, industry rules and methodologies, the factors driving growth, and the constraints blocking it. The market's development scope and various business strategies are also covered in this report.



Amazon GuardDuty Adds EKS Runtime Monitoring and RDS Protection

MMS Founder
MMS Matt Campbell

Article originally posted on InfoQ. Visit InfoQ

Amazon GuardDuty added Amazon EKS Runtime Monitoring and RDS Protection for Amazon Aurora. EKS Runtime Monitoring can detect runtime threats across more than 30 security finding types. RDS Protection adds support for profiling and monitoring access activity to Aurora databases.

Amazon EKS Runtime Monitoring uses a fully managed EKS add-on to provide visibility into container runtime activities such as file access, process execution, and network connections. It can identify containers within an EKS cluster that are potentially compromised. This includes detecting attempts to escalate privileges from the container to the underlying EC2 host.

Findings generated cover crypto-mining, trojans, unauthorized access, privilege escalation, and attempts to bypass defenses. For example, the finding Trojan:Runtime/BlackholeTraffic!DNS notifies if a container is querying a domain name that is redirecting to a black hole IP address. 

DefenseEvasion:Runtime/FilelessExecution triggers if a container process is executing code from memory. While this can be a false positive, it is a technique used to avoid writing an executable to the disk where it might be detected.

Backdoor:Runtime/C&CActivity.B reports if a container is querying an IP that is tied to a known command and control server. If the IP is known to be log4j-related, the following fields will be set to these values in the finding:

service.additionalInfo.threatListName = Amazon
service.additionalInfo.threatName = Log4j Related

EKS Runtime Monitoring is not enabled by default but can be enabled and configured in the GuardDuty console. The service can be configured to automatically deploy and update the EKS-managed add-on for all existing and future EKS clusters. Enabling this option will also create the VPC endpoint for events to be delivered to GuardDuty.

This release builds on the previously released EKS Audit Log Monitoring. EKS Audit Log Monitoring analyzes Kubernetes audit logs directly from the EKS control plane through a duplicated log stream. Kubernetes audit logs capture user activities, applications using the Kubernetes API, and control plane actions.

EKS Runtime Monitoring makes use of runtime logs collected from the hosts. AWS notes that these logs can contain fields, such as file paths, that may have been altered by malicious actors. If findings are processed outside of GuardDuty, all finding fields must be sanitized appropriately.
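Where findings are consumed outside GuardDuty, the attacker-influenced fields can be cleaned with a small helper along these lines. This is a sketch: the field names and length limit are illustrative, not part of the GuardDuty finding schema.

```python
import re

def sanitize_field(value: str, max_len: int = 1024) -> str:
    """Drop control characters and truncate a string field that may
    carry attacker-controlled content (e.g. a file path from runtime
    logs) before forwarding a finding to downstream systems."""
    cleaned = re.sub(r"[\x00-\x1f\x7f]", "", value)
    return cleaned[:max_len]

# Illustrative finding fragment; real findings use the GuardDuty schema.
finding = {"filePath": "/tmp/payload\n; rm -rf /", "processName": "sh\x00"}
safe = {k: sanitize_field(v) for k, v in finding.items()}
```

The same idea applies to any field GuardDuty marks as derived from host-collected logs rather than from AWS control-plane data.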

Other products in this space include the open-source runtime security tool, Falco. Falco’s recent release added support for updating rules at runtime and an experimental eBPF probe. Falco is a Cloud Native Computing Foundation (CNCF) incubated project.

GuardDuty RDS Protection for Amazon Aurora can detect threats such as high-severity brute force attacks, suspicious logins, and access by known threat actors. RDS Protection is enabled by default for new users to GuardDuty but must be enabled for current GuardDuty users. Enabling the service is done through the GuardDuty console.

The new threat detection services are available now within most regions that GuardDuty is available in. More details on Amazon GuardDuty can be found on the AWS site with pricing information on the pricing page.



alrajhi Bank Rize, Malaysia’s Digital Bank, Bets the Farm on AWS – Data Storage Asean

MMS Founder
MMS RSS

Posted on nosqlgooglealerts. Visit nosqlgooglealerts

Amazon Web Services (AWS), an Amazon.com company, announced today that Al Rajhi Banking and Investment Corporation (Malaysia) Bhd (alrajhi bank Malaysia)’s digital bank, Rize, is going all-in on AWS.
 
alrajhi bank Malaysia, a subsidiary of Al Rajhi Bank, the world’s largest Islamic bank by assets, will migrate its key information technology (IT) infrastructure to AWS, the world’s leading cloud provider, by 2026. This follows alrajhi bank Malaysia’s selection of AWS as its sole cloud provider for its digital bank Rize, which was launched on 1 December 2022. alrajhi bank Malaysia is using the breadth and depth of AWS’ products and services, including containers, databases, and compute to deliver innovative financial services for its digital bank, Rize, which is available via a smartphone app that will meet consumers’ every banking need without the need to visit a physical branch. With this new digital bank, Rize’s customers—known as Rizers—will benefit from customized savings solutions and transaction monitoring features that improve transparency and promote better financial management.
 
Malaysians increasingly demand faster and easier online banking experiences to better manage their financial well-being, with 82% banking on smartphones in an economy that was expected to be worth US$21 billion by the end of 2022. To meet this growing demand, alrajhi bank Malaysia required an agile, scalable, resilient, and secure cloud environment to meet local regulatory requirements, while rapidly developing and implementing a user-friendly digital bank for consumers and businesses. With Amazon Elastic Kubernetes Service (Amazon EKS), which gives customers the flexibility to start, run, and scale Kubernetes applications on AWS, alrajhi bank Malaysia is building digital banking products for Rize using microservices. With Amazon DynamoDB, a fast and flexible NoSQL database service, and Amazon Relational Database Service (Amazon RDS), a fully managed relational database service, alrajhi bank Malaysia can easily use customer datasets to enable more personalized services, like payment services, as it evolves Rize.
 
“With AWS, we were able to quickly build Rize, a mobile-first, highly scalable digital bank offering a seamless customer experience, within 12 months from conception to reality,” said Arsalaan (Oz) Ahmed, chief executive officer at alrajhi bank Malaysia. “AWS gives us the agility and confidence to deliver new innovative banking services with Rize that meet local regulatory requirements, helping Rizers better manage their finances.”
 
“Malaysia’s banking industry is digitizing rapidly, and with the issuance of digital banking licenses in 2022, AWS is helping more local organizations accelerate the pace of innovation using cloud technology to deliver financial services to Malaysians. With AWS, alrajhi bank Malaysia is meeting the rising demand for mobile banking solutions that drive financial inclusion while improving customers’ financial well-being,” said Conor McNamara, managing director at AWS in ASEAN. “We are excited to continue supporting alrajhi bank Malaysia’s ambition to become the #1 Islamic innovation bank in Malaysia, and to deliver engaging and rewarding financial services to its customers.”



New Ways to Bring MongoDB Data, Apps to Azure Cosmos DB – TechRepublic

MMS Founder
MMS RSS

Posted on nosqlgooglealerts. Visit nosqlgooglealerts


Branded as Microsoft’s “planetary scale database,” Cosmos DB is one of Azure’s foundational services, powering its own applications as well as yours. Designed for distributed application development, Cosmos DB offers a range of consistency models and, more interestingly, a series of different personalities that allow you to use it like any one of a set of familiar databases.

These include a PostgreSQL-compatible API, a graph database like Neo4j, and its own document database model, as well as support for the familiar Apache Cassandra distributed database. One of the more popular personalities is a set of APIs that aim to offer most of the features of the popular MongoDB NoSQL database. This last option is an interesting one, as it allows you to quickly take existing on-premises modern applications and quickly bring them to the cloud, ready for re-architecting on a global scale.


Understanding Request Units billing costs in Cosmos DB

There’s one issue that often confuses developers coming from a more traditional development environment: Cosmos DB uses the concept of Request Units to handle billing, in addition to Azure’s standard storage charges.

An RU is a way of describing and charging for how a database like Cosmos DB uses Azure resources. It bundles compute, I/O, and memory, using the resources needed for a 1KB read of a single item as the base unit of what can best be thought of as Cosmos DB's own internal currency.

With a single read of a single item measured as 1 RU, all other operations are billed in a similar way, bundling their actions and resource usage into an RU value. You purchase RUs that are then spent on database operations, much like buying tokens for a game like Roblox. RUs can be provisioned at a set number per second, or consumed on demand in serverless mode. You can also let your database scale its throughput as needed, though this means a particularly busy application can suddenly become very expensive to run.

The RU model, while logical for a cloud-native service, makes it hard for you to predict the cost of running Cosmos DB if you’re used to traditional costing models. While it’s possible to build tools to help predict costs, you must account for more than just the operations your database uses, as the type of consistency model you choose will affect the available throughput.
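The RU arithmetic described above can be sketched as a toy cost model. The 1 RU per 1KB point read is the documented baseline; the write and query multipliers below are illustrative assumptions, not published Azure rates, so treat this as a shape of the calculation rather than a pricing tool.

```python
def estimated_rus(item_kb: float, operation: str) -> float:
    """Rough RU estimate for one operation. A 1 KB point read costs
    1 RU (the documented baseline); the write/query multipliers are
    illustrative assumptions, not published Azure rates."""
    multipliers = {"read": 1.0, "write": 5.0, "query": 2.5}
    return max(1.0, item_kb) * multipliers[operation]

# A workload doing 100 reads/s and 20 writes/s of 2 KB items would
# need roughly this much provisioned throughput:
throughput = 100 * estimated_rus(2, "read") + 20 * estimated_rus(2, "write")
print(throughput)  # prints 400.0 (RU/s under these assumed multipliers)
```

A real estimator would also have to account for the consistency model and indexing policy, which is exactly why the costing is hard to predict.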

Introducing vCores to Cosmos DB

Microsoft is now offering an alternative to the RU model for developers bringing their MongoDB-based applications to Cosmos DB. Instead of paying for RUs and storage, you can now choose to focus on the more familiar mix of application instances and assigned disks. This gives you access to a model that’s a lot closer to MongoDB’s managed Atlas cloud service, allowing a more predictable migration from on premises or other clouds.

Available as Azure Cosmos DB for MongoDB vCore, this new release of Microsoft’s NoSQL database is a full-fledged part of your Azure infrastructure that gives you automated sharding and integration with Azure’s Command-Line Interface and other management tooling.

Microsoft describes it as a way to “modernize MongoDB with a familiar architecture.” The aim is to deliver as close as possible a set of compatible APIs, while still offering scalability. For example, Microsoft told us,

“Azure Cosmos DB for MongoDB vCore enables richer, more complex database queries such as the full-text searches that power cloud-based chatbots.”

Moving applications from MongoDB to Cosmos DB

If you have code using MongoDB’s query language to work with your data, it should work as before, with the main requirement being to change any endpoints to the appropriate Azure address.
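In practice that endpoint change is often just a new connection string. The sketch below builds one; the `<cluster>.mongocluster.cosmos.azure.com` host pattern follows the form documented for the vCore service, while the cluster name and credentials are placeholders.

```python
def cosmos_vcore_uri(cluster: str, user: str, password: str) -> str:
    """Build a MongoDB connection string pointing at an Azure Cosmos DB
    for MongoDB vCore cluster instead of a self-hosted mongod. The host
    pattern follows the vCore service's documented form; the cluster
    name and credentials here are placeholders."""
    return (
        f"mongodb+srv://{user}:{password}"
        f"@{cluster}.mongocluster.cosmos.azure.com/?tls=true"
    )

# Before: mongodb://localhost:27017  ->  after migration:
uri = cosmos_vcore_uri("mycluster", "appuser", "s3cret")
```

The rest of the driver code (queries, aggregation pipelines) is then handed the new URI unchanged, subject to the command-support caveats discussed below.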

However, not all commands are available on Cosmos DB, as the underlying features don’t map between the two databases. It’s worth paying attention to the list of supported commands, especially if you’re relying on MongoDB’s session control tooling, as much of this isn’t currently available in Cosmos DB. You’ll also have to switch any authentication to Azure’s native tooling.

Moving data between the two should be easy enough, as MongoDB’s export and import tools allow you to save data as either JSON for partial exports or the more compact BSON for a full database. If you’re moving a lot of data as JSON, this can be expensive, as you’ll be charged for data transfers.

Pricing is based on standard Azure virtual infrastructure, using either high availability or lower availability systems. If you don’t need an HA system, then you can save up to 50% on the HA pricing. Base storage for a vCore Cosmos DB system is 128GB, which should be suitable for many common workloads. You can choose to start with two vCPUs and 8GB of RAM and scale up to 32 with 128GB of RAM.

While most applications will work with little modification, the vCore release of Cosmos DB's MongoDB support, like the RU version, does have some differences from the official APIs. We asked Microsoft if there were plans to add more coverage in future releases, beyond the shift to vCore over serverless.

“In most scenarios, this makes the two technologies entirely compatible. Based on customer feedback, one of the larger pain points regarding compatibility between MongoDB and Azure Cosmos DB was the need to re-engineer and reshape their MongoDB databases to fit with how Azure Cosmos DB is architected. This release eliminates that pain point as the two databases are now essentially the same ‘shape.’ In addition, we have strong feature compatibility between the two and will continue to roll out more features as this moves out of preview and into general availability,” a spokesperson responded.

This new MongoDB option should make it easier to bring a MongoDB workload you’ve already written to Cosmos DB and thereby free yourself from having to run your own MongoDB infrastructure — or let you consolidate on using Cosmos DB as your cloud database, bringing databases from other cloud providers to Azure, where you can use all the other Azure resources and services that smaller providers like MongoDB don’t offer.



OpenAI Announces GPT-4, Their Next Generation GPT Model

MMS Founder
MMS Anthony Alford

Article originally posted on InfoQ. Visit InfoQ

OpenAI recently announced GPT-4, the next generation of their GPT family of large language models (LLM). GPT-4 can accept both text and image inputs and outperforms state-of-the-art systems on several natural language processing (NLP) benchmarks. The model also scored in the 90th percentile on a simulated bar exam.

OpenAI’s president and co-founder, Greg Brockman, demonstrated the model’s capabilities in a recent livestream. The model was trained using the same infrastructure as the previous generation model, GPT-3.5, and like ChatGPT it has been fine-tuned using reinforcement learning from human feedback (RLHF). However, GPT-4 features several improvements over the previous generation. Besides the ability to handle image input, the default context length has doubled, from 4,096 tokens to 8,192. There is also a limited-access version that supports 32,768 tokens, which is approximately 50 pages of text. The model’s response behavior is more steerable via a system prompt. The model also has fewer hallucinations than GPT-3.5, when measured on benchmarks like TruthfulQA. According to OpenAI:

We look forward to GPT-4 becoming a valuable tool in improving people’s lives by powering many applications. There’s still a lot of work to do, and we look forward to improving this model through the collective efforts of the community building on top of, exploring, and contributing to the model.

Although OpenAI has not released details of the model architecture or training dataset, they did publish a technical report showing its results on several benchmarks, as well as a high-level overview of their efforts to identify and mitigate the model's risk of producing harmful output. Because fully training the model requires so much computation power and time, they also developed techniques to predict the model's final performance, given performance data for smaller models. According to OpenAI, this will “improve decisions around alignment, safety, and deployment.”

To help evaluate their models, OpenAI has open-sourced Evals, a framework for benchmarking LLMs. The benchmark examples or evals typically consist of prompt inputs to the LLM along with expected responses. The repo already contains several eval suites, including some implementations of existing benchmarks such as MMLU, as well as other suites where GPT-4 does not perform well, such as logic puzzles. OpenAI says they will use the Evals framework to track performance when new model versions are released; they also intend to use the framework to help guide their future development of model capabilities.
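A benchmark sample in an Evals-style suite is typically one JSONL line pairing a chat-formatted prompt with a reference answer. The keys below follow the commonly used registry format but should be checked against the repo; the question itself is just an illustration.

```python
import json

# One JSONL sample in the chat-style shape Evals suites commonly use:
# a prompt under "input" and a reference answer under "ideal".
sample = {
    "input": [
        {"role": "system", "content": "Answer with one word."},
        {"role": "user", "content": "What is the capital of France?"},
    ],
    "ideal": "Paris",
}
line = json.dumps(sample)  # one line of the suite's .jsonl samples file
```

A suite is then a file of such lines plus a small config naming the grading logic, e.g. exact match against `ideal`.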

Several users discussed GPT-4 in a thread on Hacker News. One commenter said:

After watching the demos I’m convinced that the new context length will have the biggest impact. The ability to dump 32k tokens into a prompt (25,000 words) seems like it will drastically expand the reasoning capability and number of use cases. A doctor can put an entire patient’s medical history in the prompt, a lawyer an entire case history, etc….What [percentage] of people can hold 25,000 words worth of information in their heads, while effectively reasoning with and manipulating it?

However, several other users pointed out that medical and legal applications would require better data privacy guarantees from OpenAI. Some suggested that a homomorphic encryption scheme, where the GPT model operates on encrypted input, might be one solution.

Developers interested in using the model can join OpenAI’s waitlist to request access.



Oracle gives developers free access to forthcoming database release – SiliconANGLE

MMS Founder
MMS RSS

Posted on nosqlgooglealerts. Visit nosqlgooglealerts

Oracle Corp. is making the next version of its flagship database management system available free to developers under a new program announced today.

Oracle Database 23c Free—Developer Release is available for download as a Docker container image, Oracle VirtualBox virtual machine, or Linux RPM installation file without requiring a user account or login. A Windows version is planned for the near future.

Oracle 23c, announced last October, features JSON Relational Duality, a new capability that unifies relational and document data models into a single schema. Document stores are one of the most common types of NoSQL databases and JavaScript Object Notation is a widely used data format. Document databases are prized for their flexibility, but most don’t support the atomicity, consistency, isolation and durability features of the more structured relational model. The new release also allows SQL to be used for graph queries directly on transactional data and stored procedures to be written in JavaScript.

Bridging a gap

Relational duality “ends the relational versus document database debate to deliver the best of both worlds,” said Gerald Venzl, senior director of product management at Oracle. The capability allows developers to build applications using either relational or JSON constructs with access to both types of data stores. Data is held once but can be accessed, written and modified with either approach. Transactions are ACID-compliant and have concurrency controls, which eliminates tradeoffs in object-relational mappings and data consistency.
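The behavior Venzl describes can be pictured with a toy sketch in plain Python (deliberately not Oracle syntax): one set of relational rows exposed as a JSON document, with writes through either shape landing in the same storage. Names like `employees` and `as_document` are illustrative only.

```python
# Toy model of what a duality view provides: rows stored once,
# readable and writable through either a relational or document shape.
employees = {7839: {"ename": "KING", "job": "PRESIDENT"}}  # the "table"

def as_document(empno: int) -> dict:
    """Project one row as a JSON-style document."""
    row = employees[empno]
    return {"_id": empno, "name": row["ename"], "job": row["job"]}

def write_document(doc: dict) -> None:
    """Write through the document shape back to the same rows."""
    employees[doc["_id"]]["ename"] = doc["name"]

doc = as_document(7839)
doc["name"] = "King"
write_document(doc)  # the relational view now sees the update too
```

In the real feature this mapping is declared once in the schema and the database enforces ACID semantics across both access paths; the sketch only shows the single-copy, dual-shape idea.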

“We have delivered on our mission to make modern applications and analytics easy to develop and run for all use cases at any scale for quite a long time with the converged database approach,” Venzl said. “However, you still had to make a decision about whether to treat data as a document store or relational. If you decide it’s a document store, you can no longer query the relational database.”

A nod to developers

This isn’t the first time Oracle has given away its flagship software for free. It introduced the cloud-based Always Free Autonomous Database in 2019 and is now addressing what executives said is substantial demand for an on-premises version.

“We’ve also seen that developers like to develop on their laptops and check in code when they can,” Venzl said. “Even after 15 years of cloud, we still see that the top database technologies are on-premises.”

The release of the free developer edition is a nod to the “shift in power in IT over the last 10 or 15 years from IT operations to developers,” he said. “Operations used to dictate technology choices to developers. Today, developers decide the technology stack based on time to market and competitive advantage.”

Venzl emphasized that Oracle is seeking to eliminate as many barriers to developer usage as possible. There’s no need for users to sign up for an account in order to download the software. The free version has storage, memory and processor constraints that limit it to small applications and doesn’t include Oracle support.

Expanded JSON support

Other features in the new edition include the ability for developers to ensure and validate JSON document structures via structured JSON Schemas. Developers can now build both transactional and analytical property graph applications with the Oracle database using the new SQL standard property graph queries. That enables graph analytics to be run on top of both relational and JSON data. Oracle has supported graph constructs for more than 20 years, Venzl said.

Applications that use the Apache Kafka distributed event streaming platform can now run against Oracle transactional event queues in Oracle with minimal code changes.

A new SQL domain construct can act as lightweight type modifiers that centrally document intended data usage, extending and improving upon SQL standard domains. Those are data types with optional constraints that are used to abstract common constraints on fields into a single location for simpler definition and maintenance. “This takes away needs for stored procedures and checks,” Venzl said. “The application knows what the data type is and can run checks on it.”

Database metadata can now be stored directly alongside the data with a new annotation mechanism inside the Oracle database. Developers can annotate common data model attributes for tables, columns, views, indexes and other attributes to improve consistency and accessibility.




Google Announces Preview of AlloyDB Omni: Run a PostgreSQL-Compatible Database Anywhere

MMS Founder
MMS Steef-Jan Wiggers

Article originally posted on InfoQ. Visit InfoQ

Google recently announced the preview of AlloyDB Omni, a downloadable edition of AlloyDB designed to run on-premises, at the edge, across clouds, or even on developer laptops.

AlloyDB Omni is powered by the same engine that underlies the cloud-based AlloyDB service, a fully managed, PostgreSQL-compatible database service that the company released into general availability last year. Google claims that AlloyDB Omni is more than twice as fast as standard PostgreSQL for transactional workloads and delivers up to 100x faster analytical queries.

Furthermore, Omni utilizes AlloyDB’s columnar engine to minimize latency for query results by storing frequently accessed data in an in-memory columnar format. This enables faster scanning, joining, and aggregating of data. Additionally, the system employs machine learning to automatically organize data between row-based and columnar formats, switch between execution plans, and convert data formats as required.

In addition, AlloyDB Omni has an index advisor to optimize frequently run queries. Andi Gutmans, GM and VP of engineering for databases at Google Cloud, and Gurmeet Goindi, director of product management, explain the index advisor in a Google blog post:

The AlloyDB Omni index advisor helps alleviate the guesswork of tuning query performance by conducting a deep analysis of the different parts of a query, including subqueries, joins, and filters. It periodically analyzes the database workload, identifies queries that can benefit from indexes, and recommends new indexes that can increase query performance.

Source: https://cloud.google.com/blog/products/databases/run-alloydb-anywhere/

With regards to the index advisor, Gleb Otochkin, a cloud advocate at Google, tweeted:

Running my test AlloyDB in a VirtualBox VM on my mac with only 6Gb mem allocated to the VM. But it seems like the index advisor works correctly. Query exec time reduced from 18ms to less than 3ms.

Companies can use AlloyDB Omni as a drop-in alternative to PostgreSQL and use all the tools compatible with PostgreSQL to back up and replicate their databases. In addition, the service is fully compatible with any PostgreSQL-enabled applications that businesses may already be using.

One commenter, TheTao, questioned the use of AlloyDB Omni in a Hacker News thread:

Why would anyone use this as opposed to using Postgres? The value prop of run-anywhere applies to Postgres as well. I see column store and index advisor as the two features, but if I don’t need these, is there any reason?

Another commenter, Gabe Weiss, responded:

It’s twice as fast as out-of-the-box PG for most things and up to 100x faster for reads, depending on what you’re doing. So, there’s that.

Also, from manageability, on top of the index advisor, there’s also vacuum management, so it will figure out when the best time to do the garbage clean-up while minimizing the impact on performance.

Lastly, more details are available on the documentation pages, and access to the preview is possible via a signup form. Furthermore, Google will offer full enterprise support, including 24/7 technical support, once AlloyDB Omni becomes generally available.



Article: Assessing Organizational Culture to Drive SRE Adoption 

MMS Founder
MMS Vladyslav Ukis

Article originally posted on InfoQ. Visit InfoQ

Key Takeaways

  • SRE adoption is greatly influenced by the organizational culture at hand. Therefore, assessing the organizational culture is an important step to be done at the beginning of an SRE transformation.
  • The Westrum model of organizational cultures can be used to assess an organization’s culture from the production operations point of view. The six aspects of the model – cooperation level, messenger training, risk sharing, bridging, failure handling and novelty implementation – relate directly to SRE.
  • Westrum’s performance-oriented generative cultural substrate turned out to be a fertile ground for driving SRE adoption and achieving high performance in SRE.
  • Subtle culture changes in the teams during SRE adoption accumulate to a bigger organizational culture change where production operations is viewed as a collective responsibility because different roles in different teams are aligned on operational concerns.
  • Both formal and informal leadership need to work together to achieve the SRE culture change providing consistency, steadiness and stability amidst the very dynamic nature of the change at hand.

Introduction

The teamplay digital health platform and applications at Siemens Healthineers are built by a large distributed organization consisting of 25 teams owning many different digital services in the healthcare domain.

The organization underwent an SRE transformation, a profound sociotechnical change that switched the technology, process and culture of production operations. In this article, we focus on:

  • How the organizational culture was assessed in terms of production operations at the beginning of the SRE transformation
  • How a roadmap of small culture changes accumulating over time was created, and
  • How the leadership facilitated the necessary culture changes

The need to assess the organizational culture

When it comes to introducing SRE, it is easy to jump into the tech part of the change and start working on implementing new tools, infrastructure and dashboards.

Undoubtedly necessary, these artifacts alone are not sufficient to sway an organization’s approach to production operations. An SRE transformation is profoundly a sociotechnical change.

The “socio” part of the change needs to play an equal role from the beginning of the SRE transformation.

In this context, it is useful to assess the organization’s current culture, viewing it from the lens of production operations. This holds the following benefits:

  • a) It enables the SRE coaches driving the transformation to understand current attitudes towards production operations in the organization
  • b) It reveals subtle, sometimes hardly visible, ways the organization operates in terms of information sharing, decision-making, collaboration, learning and others that might speed up or impede the SRE transformation
  • c) It sparks ideas about how the organization might be evolved towards SRE and enables first projections of how fast the evolution might go

Given these benefits, how to assess the organizational culture from the production operations point of view? This is the subject of the next section.

How to assess the organizational culture?

A popular typology of organizational cultures is the so-called Westrum model by Ron Westrum. The model classifies cultures as pathological, bureaucratic or generative depending on how organizations process information:

  • Pathological cultures are power-oriented
  • Bureaucratic cultures are rule-oriented, and
  • Generative cultures are performance-oriented

Based on the Westrum model, Google’s DevOps Research and Assessment (DORA) program found through rigorous studies that generative cultures lead to high performance in software delivery. According to the Westrum model, the six aspects of the generative, high-performance culture are:

  1. High cooperation
  2. Messengers are trained
  3. Risks are shared
  4. Bridging is encouraged
  5. Failure leads to inquiry
  6. Novelty is implemented

These six aspects can be used to assess an organization’s operations culture. To approach this, the six aspects need to be mapped to SRE in order to understand the target state of culture. The table below, based on my book “Establishing SRE Foundations”, provides this mapping.

  Westrum’s generative culture Relationship to SRE
1. High cooperation SRE aligns the organization on operational concerns. This is only possible if high cooperation is established between product operations, product development and product management. Executives cooperate with the software delivery organization by supporting SRE as the primary operations methodology. This is necessary to achieve standardization, leading to economies of scale that justify the investment in SRE.
2. Messengers are trained SRE quantifies reliability using SLOs. Once corresponding error budgets are exhausted, the teams owning the services are trained on how to improve reliability. Moreover, the people on-call are trained to be effective at being on-call, which includes acting quickly to reduce the error budget depletion during outages. Postmortems after outages are viewed as a learning opportunity for the organization.
3. Risks are shared Product operations, product development and product management agree on SLIs that represent service reliability well from the user point of view, on SLOs that represent a threshold of good reliability from the user-experience point of view, and on the on-call setup required to run the services within the defined SLOs. This leads to shared decision-making on when to invest in reliability vs. new features to maximize delivered value. Thus, the risks of the investments are shared.
4. Bridging is encouraged SLO and SLA definitions are public in the organization, so is the SLO and SLA adherence data per service over time. This leads to data-driven reliability conversations among teams about reliability of dependent services. An SRE community of practice (CoP) is cross-pollinating SRE best practices among the teams and organizing organization-wide lunch & learn sessions on reliability.
5. Failure leads to inquiry Postmortems after outages are used for blameless inquiry into what happened with a view to generate useful learnings to be spread throughout the organization.
6. Novelty is implemented New insights from ongoing product operations, outages and postmortems lead to a timely implementation of new reliability features prioritized against all other work according to error budget depletion rates.

With the target culture state defined in the table above, the SRE coaches can analyze how far away from it their organization currently is.

Accumulating small culture changes over time

Once the SRE coaches understand the status quo, the SRE transformation activities can begin. These will include technical, process and behavior changes. To fuel the movement, the SRE coaches need to look for small behavior changes, celebrate them and stagger them in such a way that they accumulate over time.

For example, the following order of small changes can incrementally lead to bigger behavior changes over time pushing the culture more and more toward the target state outlined in the previous section.

#   Change Culture impact  Culture impact accumulation over time
1 Putting SRE on the list of bigger initiatives the organization works on Awareness of SRE and its promise at all levels of the organization Acceptance of potential usefulness of SRE, open-mindedness to SRE
2 Establishing SRE coaches Perception of SRE as a serious bigger initiative being driven by dedicated people throughout the organization SRE go-to people are known in the organization
3 Setting initial SLOs The first reliability quantification is undertaken; new thinking of reliability as something being quantified is induced SRE has its concepts. The central concept of SLO is now something we define for our services
4 Reacting to alerts on SLO breaches Developers no longer do only coding but also spend time monitoring their services in production Breaching the defined SLOs leads to alerts that developers spend time analyzing. Thus, the SLOs need to be very carefully designed to reflect the customer experience. Lots of SLO breaches lead to lots of time being spent on their analysis!
5 Setting up alert escalation policies  An SLO breach alert is so significant that it must reach someone who can react to it  Reaction to an SLO breach needs to happen in a timely manner, otherwise an escalation policy kicks in!
6 Implementing incident classification Incidents need classification to drive appropriate mobilization of people in the organization Mobilizing people to troubleshoot an incident happens depending on the incident classification
7 Implementing incident postmortems Incidents warrant spending time on understanding what really happened, why and how to avoid the same incident from happening again in future Incidents do not just come and go. Rather, they are carefully analyzed after being solved, inducing a learning cycle into the organization
8 Setting up error budget policies Error budget consumption is tracked. Once it hits a certain threshold, it becomes subject to a predefined policy of action Lots of SLO breaches can accumulate to significant error budget consumption. There is a policy to ensure the error budget consumption does not exceed some thresholds
9 Setting up error budget-based decision-making Prioritization decisions about reliability are based on data from production tracking the error budget consumption over time Different people at different levels of the organization use the error budget consumption data to steer reliability investments
10 Implementing organizational structure for SRE SRE is so widely established in the organization that a formal structure with roles, responsibilities and organizational units is established  SRE is a standard operations methodology now that is even reflected in organizational structure and processes
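Steps 3, 8 and 9 in the table above rest on a simple calculation: an SLO target implies an error budget of allowed bad minutes per window, and consumption is tracked against a policy threshold. A minimal sketch follows; the 28-day window and the 80% policy threshold are illustrative assumptions, not values from the article.

```python
WINDOW_MINUTES = 28 * 24 * 60  # assumed 28-day rolling window

def error_budget_minutes(slo_target: float) -> float:
    """Minutes of unavailability the SLO permits in the window."""
    return (1.0 - slo_target) * WINDOW_MINUTES

def budget_consumed(downtime_minutes: float, slo_target: float) -> float:
    """Fraction of the error budget already spent (can exceed 1.0)."""
    return downtime_minutes / error_budget_minutes(slo_target)

budget = error_budget_minutes(0.999)        # 99.9% SLO -> ~40.3 minutes
consumed = budget_consumed(35.0, 0.999)     # 35 bad minutes so far
policy_triggered = consumed >= 0.80         # hypothetical policy threshold
```

Publishing these numbers per service is what makes the error budget-based decision-making in step 9 data-driven rather than opinion-driven.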

The culture changes outlined in the table above are driven using an interplay of formal and informal leadership. These dynamics are described in the next section.

Interplay of formal and informal leadership

In every hierarchical organization, there are leaders who possess formal authority due to their placement in the organizational chart. If these leaders are trusted by the broader organization, they enjoy a multiplication effect on their efforts thanks to a large following of people in the organization.

At the same time, lots of hierarchical organizations have informal leaders who do not possess formal authority because they do not have a prominent place in the organizational chart. They have, however, earned trust from the overall organization. This trust enables them to also enjoy a multiplication effect on their efforts because a large number of people in the organization follow them voluntarily.

In the table below, formal and informal leadership types are summarized.

  Supporting the SRE transformation Detrimental to SRE transformation Supporting the SRE transformation
Leadership type  Formal leadership enjoying trust from the organization Formal leadership without trust from the organization Informal leadership enjoying trust from the organization
Following type A large following of people in the organization, which is both voluntary and authority-based A following of people based on formal authority A large voluntary following of people in the organization

A good combination of the leadership types described in the leftmost and rightmost columns provides the necessary environment to push SRE through the organization in a balanced top-down and bottom-up manner. It caters for the required consistency, steadiness and stability amidst the very dynamic nature of the SRE transformation. The teams feel that the formal leadership supports SRE while informal leaders help drive the necessary mindset, technical and process changes throughout the organization. This maximizes the chances of success for the SRE transformation.

From the trenches

The culture assessment method described above helped the Siemens Healthineers digital health platform organization successfully evolve operations towards SRE. In this section, we present a few real learnings from the trenches of our SRE transformation.

Learning 1: Involve the product owners from the beginning

One of the most profound things we got right was to involve the product owners in the SRE transformation from the beginning. The SRE value promise for the product owners is to reduce the customer escalations they might experience due to digital services not working as expected. The escalations are annoying, time-consuming and cause unwanted management attention. This motivates the product owners to attend SRE meetings where the SLOs are defined and associated processes are discussed.

The product owners in SRE meetings:

  • Provided context of the most important customer journeys from the business point of view
  • Assessed the business value of higher reliability at the cost discussed in the meetings
  • Got closer to production operations by being involved in SRE discussions from the start
  • Developed an understanding of how to prioritize investments in reliability vs. features in a data-driven way

Learning 2: Get the developers’ attention onto production first

The major problem with organizations new to software as a service is that developers are not used to paying attention to production. Rather, traditionally their world starts with a feature description and ends with a feature implementation. Running the feature in production is out of scope. This was the case with our organization at the beginning of the SRE transformation.

In this context, the most impactful milestone to achieve at the beginning of the SRE transformation was to channel the developers’ attention onto production. This was an 80/20 kind of milestone, where 20% of the effort yielded 80% of the improvement.

It was less important to get the developers to be perfect about their SLO definitions, error budget policy specifications, etc. Rather, it was about supplying the developers with the very basic tools and the initial motivation to move their attention to production. Regularly spending time on production analysis was half the battle in acquiring the new habit of operating software.

Once there, the accuracy of applying the SRE methodology could be brought about step by step.

Learning 3: Do not fear letting the team fail fast at first

When it comes to the initial SLO definitions, our experience was that teams tended to overestimate the reliability of their services at first. They tended to set higher availability SLOs than their services achieved on average. Likewise, they tended to set stricter latency SLOs than the services could fulfill.

Convincing the teams at this initial stage to relax the initial SLOs was futile. Even the historical data sometimes did not convince the teams. We found that a fail fast approach was actually working best.

We set the SLOs as suggested by the teams, without much debate. Unsurprisingly, the teams got flooded with alerts on SLO breaches. Inevitably, the big topic of the next SRE meeting was the sheer number of alerts the team could not process.

This made the team fully understand the consequences of their SLO decisions. In turn, the SLO redefinition process got started. And this was exactly what was needed: a powerful feedback loop from production on whether the services fulfill the SLOs or not, leading to a reevaluation of the SLOs.
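The dynamic behind this fail-fast feedback loop is easy to make concrete: when measured availability sits below an optimistic SLO target, nearly every measurement window breaches and alerts flood in. The numbers below are hypothetical, chosen only to illustrate the effect.

```python
def windows_breaching(measured_availability: list[float],
                      slo_target: float) -> int:
    """Count measurement windows whose availability misses the SLO."""
    return sum(1 for a in measured_availability if a < slo_target)

# A service that actually delivers ~99.5%, with a team-chosen 99.95% SLO.
daily_availability = [0.9951, 0.9949, 0.9958, 0.9940, 0.9952, 0.9947, 0.9960]

optimistic = windows_breaching(daily_availability, 0.9995)  # every day alerts
relaxed = windows_breaching(daily_availability, 0.9930)     # achievable target
```

Seeing seven breach alerts in seven days, versus zero under a realistic target, is precisely the production feedback that drove the SLO redefinition.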

Learning 4: Build a coalition of formal and informal leaders

We found it very useful to have a coalition of formal and informal leaders championing SRE in the organization. The informal leaders were self-taught about SRE and bursting with energy to introduce it in the organization. To do so, they required support from the formal leadership to commit capacity in the teams for SRE work.

The informal leaders needed to sell SRE to the formal leaders on the promise of reducing customer escalations due to service outages. These conversations happened with the head of R&D and head of operations. In turn, these leaders needed to sell SRE to the entire leadership team so that the topic gets put onto a portfolio list of big initiatives undertaken by the organization.

Once that happened, there was a powerful combination of enough formal leaders supporting SRE, SRE being on the list of big initiatives undertaken by the organization and an energized group of informal leaders ready to drive SRE throughout the organization.

This organizational state was conducive to achieving successful production operations using SRE!

Summary

An SRE transformation is a large sociotechnical change for a software delivery organization that is new to or just getting started with digital services operations. The speed of the change is largely determined by the organizational culture at hand. It is people’s attitudes to and views about production operations that are the highest mountains to move, not the tools and dashboards used by the people on a daily basis.

Therefore, assessing the organizational culture before embarking on the SRE transformation is a useful exercise. It enables the SRE coaches driving the transformation to understand where the organization currently is in terms of operations culture. It further ignites a valuable thinking process of how it might be possible to evolve the culture towards SRE.



Where Will Mongodb Inc (MDB) Stock Go Next After It Is Up 7.94% in a Week?

MMS Founder
MMS RSS

Posted on nosqlgooglealerts. Visit nosqlgooglealerts

Monday, April 03, 2023 11:07 AM | InvestorsObserver Analysts

Mongodb Inc (MDB) stock has risen 7.94% over the past week and gets a Bullish rating from InvestorsObserver Sentiment Indicator.

Sentiment Score: Bullish
Mongodb Inc has a Bullish sentiment reading. Find out what this means for you and get the rest of the rankings on MDB!

What is Stock Sentiment?

In investing, sentiment generally means whether or not a given security is in favor with investors. It is typically a pretty short-term metric that relies entirely on technical analysis. That means it doesn’t incorporate anything to do with the health or profitability of the underlying company.

Price action is generally the best indicator of sentiment. For a stock to go up, investors must feel good about it. Similarly, a stock that is in a downtrend must be out of favor.

InvestorsObserver’s Sentiment Indicator considers price action and recent trends in volume. Increasing volumes often mean that a trend is strengthening, while decreasing volumes can signal that a reversal could come soon.

The options market is another place to get signals about sentiment. Since options allow investors to place bets on the price of a stock, we consider the ratio of calls and puts for stocks where options are available.
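The call/put signal described above boils down to simple arithmetic: the ratio of put volume to call volume, where a ratio below 1 suggests bullish positioning. InvestorsObserver's exact formula is not public, so the sketch below is a generic illustration with made-up volumes.

```python
def put_call_ratio(put_volume: int, call_volume: int) -> float:
    """Puts divided by calls; below 1.0 suggests bullish positioning."""
    return put_volume / call_volume

# Hypothetical daily option volumes for a bullish-leaning name.
ratio = put_call_ratio(put_volume=4_000, call_volume=10_000)
leans_bullish = ratio < 1.0
```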

What’s Happening With MDB Stock Today?

Mongodb Inc (MDB) stock is trading at $227.38 as of 11:02 AM on Monday, Apr 3, a decline of $5.74, or 2.46%, from the previous closing price of $233.12. The stock has traded between $227.00 and $232.15 so far today. Volume today is less active than usual. So far 655,560 shares have traded, compared to an average volume of 1,807,625 shares.

More About Mongodb Inc

Founded in 2007, MongoDB is a document-oriented database with nearly 33,000 paying customers and well past 1.5 million free users. MongoDB provides both licenses and subscriptions as a service for its NoSQL database. MongoDB’s database is compatible with all major programming languages and is capable of being deployed for a variety of use cases.

