Month: May 2023
MMS • Ben Linders
Article originally posted on InfoQ. Visit InfoQ
In the cloud, an application can be developed end-to-end together with its accompanying infrastructure. This makes it possible to apply test-driven development (TDD) and refactoring to the full application, which can bring down maintenance costs.
Michal Svoboda will give a talk about test-driven development of cloud apps at XP 2023. This conference is held June 13-16 in Amsterdam, the Netherlands.
For cloud apps, applications can be developed and deployed along with their accompanying infrastructure as one coherent piece of code. According to Svoboda, removing the “infrastructure” as a separate element allows us to apply agile engineering techniques such as TDD and refactoring on the scope of the whole app, including its cloud resources.
The latency and asynchronous nature of the cloud can be a problem: waiting for resources to provision or for timeouts to elapse obstructs the rapid TDD cycle. Svoboda suggests switching to an incremental update model, i.e. not destroying resources at the end of each test and performing a clean deployment only when integrating:
Test speedup techniques were drawn from our TDD bag of tricks. Tactically using state-based tests or testing only modified parts of the code would be a few examples. It pays to remember that hurdles in testing provide useful feedback to the whole development cycle. This feedback made us weigh carefully our architecture and procedural choices.
According to Svoboda, TDD brings down app maintenance costs, which are by far the largest part of software TCO. Using TDD, it is easy to add features or refactor anywhere, be it your own code or your use of cloud resources, even years later.
InfoQ interviewed Michal Svoboda about cloud development using TDD.
InfoQ: How has the cloud impacted the way we provision infrastructure?
Michal Svoboda: Through APIs, cloud resources can be created and destroyed in a fully automated way. (Strictly speaking, this isn’t just the cloud. Cloud providers just make this function extremely accessible.) We don’t need to think about “infrastructure”, as in servers and networks that exist independently of applications. “Infrastructure” no longer requires a special approach.
On top of the classic infrastructure, the cloud provides single-specialty services, such as storage, functions, or streams. Many cloud apps don’t just run in the cloud, they consist of the cloud.
InfoQ: How do you do test-driven development for cloud applications?
Svoboda: TDD of cloud apps is similar to TDD of other apps. Instead of calling constructors and functions to create objects in memory, we call APIs to create resources in the cloud. An “arrange, act, assert” test for a stream resource is illustrated in pseudo-code below:
[Test that stream can be written and read]
- Deploy stream and read/write roles
- Put data to stream using writer role
- Poll stream using reader role, asserting the correct object is received or timeout
- Remove stream and roles
This is a very simple functional test. State-based tests can be performed using API calls that “query configuration” of resources. More complex setups of resources can be tested using the same principle.
As per TDD, we let the test be written first, fail, and follow with the implementation. Importantly, we listen to feedback and let any difficulties of testing drive our development. Our technology, architecture and procedural choices are based on ease of testing.
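For illustration, here is a minimal sketch of the stream test above in Python, assuming AWS Kinesis accessed through boto3 and pytest as the test runner; the stream name is hypothetical, and the reader/writer role setup from the pseudo-code is omitted for brevity:

import time

import boto3
import pytest

STREAM = "tdd-demo-stream"  # hypothetical resource name


@pytest.fixture
def kinesis():
    # Arrange: deploy the stream and wait until it is active.
    client = boto3.client("kinesis")
    client.create_stream(StreamName=STREAM, ShardCount=1)
    client.get_waiter("stream_exists").wait(StreamName=STREAM)
    yield client
    # Remove the stream at the end of the test.
    client.delete_stream(StreamName=STREAM)


def test_stream_can_be_written_and_read(kinesis):
    # Act: put data on the stream.
    kinesis.put_record(StreamName=STREAM, Data=b"hello", PartitionKey="k1")

    # Assert: poll the stream until the record arrives or a timeout elapses.
    shards = kinesis.describe_stream(StreamName=STREAM)["StreamDescription"]["Shards"]
    iterator = kinesis.get_shard_iterator(
        StreamName=STREAM, ShardId=shards[0]["ShardId"], ShardIteratorType="TRIM_HORIZON"
    )["ShardIterator"]
    deadline = time.time() + 30
    records = []
    while time.time() < deadline and not records:
        response = kinesis.get_records(ShardIterator=iterator)
        records = response["Records"]
        iterator = response["NextShardIterator"]
        time.sleep(1)
    assert any(record["Data"] == b"hello" for record in records)

Under the incremental update model suggested above, the fixture would reuse an already-deployed stream between runs instead of creating and deleting it each time.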
InfoQ: What challenges have you faced and how did you deal with them?
Svoboda: Available tooling was a problem. For this TDD-on-cloud approach to work well, resource deployment code must be a first-class citizen in the programming language of choice. Contemporary tools provide command-line interfaces over a model in their own languages, in a “cloud Makefile” fashion. Because these tools follow the “separate infrastructure” paradigm, it can get cumbersome to communicate with them. This was also valuable feedback early in our development and steered our tooling and provider decisions.
InfoQ: Besides lower costs, what benefits did you get from doing TDD for cloud apps?
Svoboda: The tests make it possible to account for edge cases. The applications are stable and we know what to expect. We even worked out a few rough edges with our cloud provider!
Because our approach made it very easy to set up and test resources, we benefited in the prototyping phase as well. The cloud is a complex environment and we failed more times than I can count due to programming errors and wrong functionality assumptions. Using this approach, we failed fast.
Many important questions were pragmatically answered early in the development. What technologies will we use? How will we deploy and operate our application? How will we manage long-term state and sensitive data?
InfoQ: What’s your advice to people who want to try out TDD for their cloud application?
Svoboda: Start slow with a smaller-scale project first. There are a few things that will need bootstrapping before the first test can pass. Get used to the mechanics. Make sure to refactor aggressively. Learn from the feedback. Good luck!
MMS • Steef-Jan Wiggers
Article originally posted on InfoQ. Visit InfoQ
Microsoft recently announced the public preview of Azure Container Storage, a volume management service built natively for containers.
Azure Container Storage provides a consistent management experience across different storage offerings, including a managed option (backed by Azure Elastic SAN), Azure Disks, and ephemeral disks on container services – intended to simplify the deployment of persistent volumes. Previously, customers had to use individual container storage interface (CSI) drivers to offer cloud storage for containers, causing various operational issues regarding application availability, performance, cost, usability, and stability.
With Azure Container Storage, customers can now easily create and manage block storage volumes for production-scale stateful container applications and run them on Kubernetes, ensuring a consistent experience across different environments. In addition, the service is optimized to enhance the performance of stateful workloads on Azure Kubernetes Service (AKS) clusters: it accelerates the deployment of stateful containers with persistent volumes and reduces pod failover time through fast attach/detach.
In an Azure blog post, the authors explain that the service aligns with open-source container-native storage approaches:
Azure Container Storage runs microservices-based storage controllers in Kubernetes to abstract the storage management layer from pods and backing storage, enabling portability across Kubernetes nodes and the ability to mount different storage options.
Azure Container Storage components include:
- A storage pool: A collection of storage resources grouped and presented as a unified storage entity for your AKS cluster.
- A data services layer: Responsible for replication, encryption, and other add-on functionality absent in the underlying storage provider.
- A protocol layer: Exposing provisioned volumes via NVMe-oF protocol to application pods.
Some of the other key benefits of the service besides the consistent management experience are:
- Cost optimization: Azure Container Storage allows for dynamic sharing of provisioned resources on a Storage Pool, optimizing storage utilization and price-performance, resulting in a projected 20% total cost of ownership (TCO) saving when running a stateful Kubernetes cluster on Azure with AKS.
- Easy scaling: Azure Container Storage can quickly scale storage according to customers’ application needs, with optimized latencies for persistent volume (PV) creation and increased scalability limits. A larger number of PVs can be attached to a pod, even for pods hosted on small AKS nodes, providing more flexibility in designing application architectures without storage limitations.
- Integration with Kubernetes: Azure Container Storage offers seamless integration with Kubernetes, allowing users to leverage familiar kubectl commands for deployment, management, and automation of volume management flows while also providing Azure native user experience support through Azure Portal, CLI, and PowerShell.
Leandro Carvalho, a cloud solution architect – Support for Mission Critical at Microsoft, tweeted:
#Azure Container Storage is a game-changer for stateful apps on #AKS.
In addition, Dr. Ian McDonald, an EMEA cloud solution architect director, tweeted:
Great to see – making storage available to containers that can scale more quickly and solve issues such as high IOPS needed for small storage.
More details on Azure Container Storage are available on the documentation landing page. The preview is available in the East US, West Europe, West US 2, and West US 3 regions; access requires signing up through a short survey.
Lastly, the pricing for Azure Container Storage comprises two components: the cost of the underlying storage the customer uses and a service fee for Azure Container Storage orchestration. The details of the pricing are available on the pricing page.
MMS • RSS
Posted on nosqlgooglealerts. Visit nosqlgooglealerts
NoSQL (Not Only SQL) is a database mechanism developed for the storage, analysis, and access of large volumes of unstructured data. NoSQL allows schema-less data storage, which is not possible with relational database storage. The benefits of using a NoSQL database include high scalability, simpler designs, and higher availability with more precise control. The ability to comfortably manage big data is another significant reason for the adoption of NoSQL databases. The lack of awareness regarding NoSQL's benefits over relational database approaches is a major restraint to the wider adoption of NoSQL technology. A lack of infrastructure to support NoSQL solutions also limits adoption among enterprises. However, in the next few years, awareness is expected to increase, and NoSQL databases are expected to witness rapid adoption to support explosively growing business data, especially in social networks, retail, and e-commerce.
This NoSQL Market Report gives evaluations and insights based on consultations with key players such as CEOs, managers, and department heads of suppliers, manufacturers, and distributors.
KEY PLAYERS: MongoDB, Couchbase, Amazon Web Services, Aerospike, Neo4j, InfiniteGraph, Basho Technologies, Hypertable Inc., Apache Cassandra, MarkLogic
Request To Download Sample of This Strategic Report: https://reportocean.com/industry-verticals/sample-request?report_id=30957
The market is segmented into types of NoSQL databases, into categories such as document store, key-value stores, graph based databases and column storage. According to applications, the market is classified into data storage, e-Commerce, web applications, social networking, mobile applications and data analytics. The NoSQL databases are found useful in industrial verticals such as retail, gaming, IT and others. Geographically, the market is segmented across North America, Europe, Asia-Pacific and Latin America, Middle East & Africa (LAMEA).
Governmental organizations are increasing their maintenance budgets for system infrastructure while also investing in initiatives for project development, modernization, and enhancement. This has led to successful investments and an increase in the annual funding that ICT vendors set aside for the growth of the online market. Global ICT exports are anticipated to rise by an average of 3.9% yearly, from US$ 784.3 billion in 2021 to US$ 955.19 billion in 2030; the global supply of ICT has increased by 9.5% yearly since 2009.
In terms of global ICT exports in 2021, Ireland ranked first with US$ 169.32 billion, followed by the United States, India, and China. Brunei’s global ICT exports have increased by 228.2% year over year since 2009, while Sierra Leone’s have decreased by 61.7% year over year in the same period. Overall, the global ICT market continues to grow, driven by increased investments and funding for infrastructure and project development.
The ICT industry in Europe is predicted to experience moderate growth in the coming years, with an annual increase of 1.5% expected from 2021 to 2026. Germany currently holds the top position in terms of ICT revenue in Europe, followed by the United Kingdom, France, and Ireland. While some countries, like Malta, have experienced significant growth in the ICT industry since 2016, others, like Italy, have seen a slight decline. This information can be useful for businesses and investors looking to enter or expand in the European ICT market.
Key Benefits
Competitive advantages of NoSQL features described in the report highlight the market potential in a comprehensive manner
Value chain analysis provides key inputs on the role of all key intermediaries in the market which would better help in devising appropriate strategies
Porter’s five forces analysis highlights the potency of suppliers and buyers along with the competitive scenario of the market, which facilitates efficient business planning
Estimations are made in accordance to the current market scenario and projected future trends for the analysis period of 2014-2020, with base figures of 2013
MARKET SEGMENTATION
The global NoSQL market is segmented on the basis of types, applications, verticals and geographies.
MARKET BY TYPE
Key-value store
Document databases
Column based stores
Graph database
MARKET BY APPLICATION
Data storage
Metadata store
Cache memory
Distributed data depository
E-Commerce
Mobile Apps
Web applications
Data analytics
Social networking
MARKET BY VERTICALS
Retail
Online gaming
IT
Social network development
Web applications management
Others
Government
BFSI
Healthcare
Education
MARKET BY GEOGRAPHY
North America (NA)
Europe (EU)
Asia-Pacific (APAC)
Latin America, Middle-East and Africa (LAMEA)
Table of Content:
- Report Overview
- Global Growth Trends
- Competition Landscape by Key Players
- Data Segments
- North America Market Analysis
- Europe Market Analysis
- Asia-Pacific Market Analysis
- Latin America Market Analysis
- Middle East & Africa Market Analysis
- Key Players Profiles Market Analysis
- Analysts Viewpoints/Conclusions
- Appendix
Some of the Key Questions Answered in this Report:
- What is the market size at the regional and country level?
- What are the key drivers, restraints, opportunities, and challenges of the market, and how are they anticipated to influence it?
- What are the global (North America, Europe, Asia-Pacific, South America, Middle East and Africa) revenue, production value, consumption value, imports, and exports of the market?
- Who are the key global producers in the market industry? What is their operating situation (capacity, production, sales, price, cost, gross margin, and revenue)?
- What opportunities and threats do vendors in the global market industry face?
- Which application, end user, or product type may offer incremental growth prospects? What is the market share of each type and application?
- What focused approaches and constraints are holding back the market?
- What are the different sales, marketing, and distribution channels in the global industry?
- What are the upstream raw materials and manufacturing equipment of the market, along with its manufacturing process?
- What are the key market trends impacting the growth of the market?
- What is the economic impact on the market industry, and what are the development trends of the industry?
Request full Report: https://reportocean.com/industry-verticals/sample-request?report_id=30957
About Report Ocean:
We are the best market research reports provider in the industry. Report Ocean is the world’s leading research company, known for its informative research reports. We are committed to providing our clients with both quantitative and qualitative research results. As a part of our global network and comprehensive industry coverage, we offer in-depth knowledge that allows informed and strategic business decisions. We utilize the most recent technology and analysis tools along with our own unique research models and years of expertise, which assist us in creating essential details and facts that exceed expectations.
Get in Touch with Us:
Report Ocean:
Email: sales@reportocean.com
Address: 500 N Michigan Ave, Suite 600, Chicago, Illinois 60611 – UNITED STATES
Tel:+1 888 212 3539 (US – TOLL FREE)
Website: https://reportocean.com
MMS • RSS
Posted on nosqlgooglealerts. Visit nosqlgooglealerts
DataStax on Wednesday said that it was partnering with Houston-based startup ThirdAI to bring large language models (LLMs) to its database offerings, such as DataStax Enterprise for on-premises and NoSQL database-as-a-service AstraDB.
The partnership, according to DataStax’s chief product officer, Ed Anuff, is part of the company’s strategy to bring artificial intelligence to where data is residing.
ThirdAI can be installed in the same cluster where DataStax is running, on-premises or in the cloud, because it ships as a small library whose installation can be handled with Python.
“The benefit is that the data does not have to move from DataStax to another environment, it is just passed to ThirdAI which is adjacent to it. This guarantees full privacy and also speed because no time is lost in transferring data over a network,” a DataStax spokesperson said.
“ThirdAI can be run as a Python package or be accessed via an API, depending on the customer preference,” the spokesperson added.
Enterprises running DataStax Enterprise or AstraDB can use the data residing in those databases and ThirdAI’s tech and LLMs to spin up their own generative AI applications. The foundation models from ThirdAI can be trained to understand data and answer queries, such as which product recommendation would likely result in a sale, based on a customer’s history.
The integration of ThirdAI’s LLMs will see DataStax adopt the startup’s Bolt technology, which can achieve better AI training performance on CPUs compared to GPUs for relatively smaller models. The advantage is that CPUs are generally priced lower than GPUs, which are usually used for AI and machine learning workloads.
“The Bolt engine, which is an algorithmic accelerator for training deep learning models, can reduce computations exponentially. The algorithm achieves neural network training in 1% or fewer floating point operations per second (FLOPS), unlike standard tricks like quantization, pruning, and structured sparsity, which only offer a slight constant factor improvement,” ThirdAI said in a blog post.
“The speedups are naturally observed on any CPU, be it Intel, AMD, or ARM. Even older versions of commodity CPUs can be made equally capable of training billion parameter models faster than A100 GPUs,” it added.
Bolt can also be invoked by “just a few” line changes in existing Python machine learning pipelines, according to ThirdAI.
The announcement with ThirdAI is the first in a new partnership program that DataStax is setting up to bring in more technology from AI startups that can help enterprises with data residing on Datastax databases develop generative AI applications.
MMS • Sergio De Simone
Article originally posted on InfoQ. Visit InfoQ
A recent paper by a group of researchers at OpenAI outlines a novel approach to address one of the limitations of current deep neural networks (DNNs), namely their lack of interpretability. Using GPT-4, the researchers aim to build a technique to explain what events cause a neuron to activate, as a first step towards automating DNN interpretability.
OpenAI’s approach to DNN interpretability consists of three steps: generating an explanation of the neuron’s behavior, simulating the neuron’s activation based on the explanation, and calculating a score for the explanation.
In the first step, a prompt is sent to the explainer model, which will generate some explanation of the neuron’s activation. For example, one explanation could look like: “Explanation of neuron 1 behavior: the main thing this neuron does is find phrases related to community”.
Once an explanation is available, the next step is using it to simulate the neuron’s behavior. This means determining how the neuron activates for each token in a particular sequence, under the hypothesis that the explanation is correct. This produces a list of tokens and integers between 0 and 10, representing the probability of activation.
In the third step, the aim is to score an explanation by comparing the simulated and actual neuron behavior. This can be accomplished by comparing the list produced in the simulation step with the output produced by the real neuron for the same list of tokens. This step is the most complex of the three and admits a number of different algorithms producing distinct results.
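As a toy illustration of the scoring step, the sketch below correlates simulated activations with real ones, one of several possible scoring algorithms; the token sequence and activation values are invented for the example:

import numpy as np


def score_explanation(simulated, actual):
    # Pearson correlation between simulated and actual per-token activations:
    # values close to 1.0 mean the explanation predicts the neuron well.
    simulated = np.asarray(simulated, dtype=float)
    actual = np.asarray(actual, dtype=float)
    return float(np.corrcoef(simulated, actual)[0, 1])


# Per-token values for the hypothetical sequence ["the", "community", "garden"].
simulated = [0, 9, 2]       # simulator output on the 0-10 scale
actual = [0.1, 7.4, 1.3]    # activations observed from the real neuron
print(score_explanation(simulated, actual))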
Using this strategy, OpenAI researchers have been able to find likely explanations for non-trivial neurons, such as a neuron for phrases related to certainty and confidence, another for things done correctly, and many more. The results are still preliminary, though, as a number of fundamental questions remain, including whether a neuron’s behavior admits an explanation at all, say the researchers.
DNN interpretability is still very much a research topic, pursuing the goal of providing an explanation of DNN behavior in terms that are understandable to a human and related to the application domain.
Interpretability is key to allow a human supervisor to understand whether a DNN is behaving as expected and thus can be trusted. This property can be crucial where DNN failure may cause catastrophic results. Additionally, it can help engineers to identify the root causes of DNN misbehavior.
Interpretability also has ethical and legal implications. For example, European laws establish that people have the right not to be subject to algorithmic decisions and to obtain human intervention, which would be impossible if the human controller had no means to interpret the algorithmic decision.
If you are interested in the details of OpenAI’s approach to interpreting DNNs, do not miss their original article, which includes prompt examples and a full discussion of scoring validation techniques, results, limitations, and alternative evaluation algorithms.
Breaking Down Barriers: Introducing JDK 21’s Approach to Beginner-Friendly Java Programming
MMS • A N M Bazlur Rahman
Article originally posted on InfoQ. Visit InfoQ
JEP 445, Unnamed Classes and Instance Main Methods (Preview), has been promoted from Proposed to Target to Targeted status. This feature JEP, formerly entitled Implicit Classes and Enhanced Main Methods (Preview), proposes to “evolve the Java language so that students can write their first programs without needing to understand language features designed for large programs.” This is a preview language feature.
This JDK Enhancement Proposal (JEP) aims to make Java more approachable for beginners and is led by Brian Goetz, a renowned engineer at Oracle Corporation and the Java language architect. Goetz has detailed the context of this proposal on the OpenJDK website in a document titled Paving the on-ramp, acknowledging that while Java is a widely taught language appreciated for its simplicity, it does present some initial challenges to newcomers. Specifically, beginners often find the necessity of declaring a class and understanding the "public static void main" method to be somewhat challenging concepts to grasp.
The proposal puts forth three significant changes to address these concerns: the introduction of a more forgiving launch protocol; the inclusion of unnamed classes; and the establishment of predefined static imports for vital methods and fields. These changes are expected to simplify the learning process and make Java a more welcoming language for those embarking on their programming journey, whether in jshell, Notepad, or a full-fledged IDE.
The first change, the more flexible launch protocol, allows the classic "Hello, World!" program to be simplified to:
class HelloWorld {
void main() {
System.out.println("Hello, World!");
}
}
The second change makes the class declaration implicit. This further simplifies the "Hello, World!" program to:
void main() {
System.out.println("Hello, World!");
}
The third change is not demonstrated in the above snippets but involves predefined static imports for essential methods and fields.
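As a hypothetical sketch based on Goetz's Paving the on-ramp document, rather than anything shipped in this preview, a predefined static import for println could shrink the program even further:
void main() {
    // println here assumes a predefined static import, which is not part of JEP 445
    println("Hello, World!");
}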
As a preview feature, JEP 445 needs to be explicitly enabled when using JDK 21. Developers can do so by compiling the program with the --enable-preview flag:
javac --release 21 --enable-preview Main.java
And then running the program with the --enable-preview flag:
java --enable-preview Main
Alternatively, if developers prefer to use the source-code launcher, they can run the program with the --enable-preview flag:
java --enable-preview --source 21 Main.java
Developers should remember to replace 'Main' in the above commands with the actual name of their Java file or class.
The Java Language Specification (JLS) is being updated with a more flexible launch protocol to offer greater flexibility in declaring a program’s entry point and to allow instance main methods. This protocol enables the main method of a launched class to have public, protected, or default access, and it also provides support for static main methods with no parameters. If a class doesn’t have a static main method but contains a non-private zero-parameter constructor and a non-private instance main method, then an instance of the class can be constructed, and the instance main method can be invoked. This flexibility allows programs, such as "Hello, World!", to be written without access modifiers, static modifiers, or a String[] parameter. Also, it issues a warning at runtime if there is a change in behavior due to the invocation of an instance main instead of an inherited “legacy” main method.
Java is now introducing the concept of unnamed classes to further simplify the language for beginners and small programs. These unnamed classes, always part of the unnamed package, are helpful for standalone programs or program entry points. When the Java compiler encounters a method not enclosed in a class declaration, it will consider such methods, any unenclosed fields, and classes declared in the file as members of an unnamed top-level class. Unnamed classes are final, cannot implement any interface or extend any class other than Object, and cannot be referenced by name. However, they must contain a main method that can be launched, a requirement enforced by the Java compiler.
This new feature allows developers to write programs without explicit class declarations. For example, "Hello, World!" can be written just as a method or using a field, with top-level members interpreted as part of the unnamed class. If an unnamed class has an instance main method, launching it is equivalent to employing the existing anonymous class declaration construct. Moreover, a source file containing an unnamed class can be launched with the source-code launcher, with the Java compiler compiling the file into a launchable class file. So developers can write the program as:
String greeting() { return "Hello, World!"; }
void main() {
System.out.println(greeting());
}
or, using a field as:
String greeting = "Hello, World!";
void main() {
System.out.println(greeting);
}
This simplification enhances Java’s flexibility and ease of use, especially for beginners still getting comfortable with core programming concepts.
Developers who want to experiment with these new features can download the OpenJDK from the OpenJDK JDK 21 Early-Access Builds.
Another alternative is to use SDKMan, a software development kit manager, to download and manage different versions of Java. SDKMan can be used via the command line, which can make the process easier for developers who prefer this method.
However, these are early-access builds, so they may not be as stable as the final release, scheduled for September 2023, and are intended for testing and feedback purposes.
ASP.NET Core in .NET 8 Preview 4: Blazor Streaming, Form Handling, Native AOT, Identity API and More
MMS • Almir Vuk
Article originally posted on InfoQ. Visit InfoQ
The latest release of .NET 8 Preview 4 brings significant improvements to ASP.NET Core. Notable enhancements include Blazor’s streaming rendering and form handling, expanded support for form binding in minimal APIs, Native AOT compilation for improved performance, enhanced authentication and authorization with Identity API endpoints, and the addition of metrics for application monitoring.
The first noteworthy area of improvement is reserved for Blazor. With the latest preview release of .NET 8, a significant enhancement has been made to Blazor’s server-side rendering (SSR) capabilities. With the introduction of streaming rendering, developers can now stream content updates on the response stream when using SSR with Blazor. This feature allows developers to render pages with placeholder content while async operations are executed, ensuring the main layout of the application is swiftly displayed. To enable streaming rendering, developers need to include the new Blazor script and apply the [StreamRendering(true)] attribute to the desired component.
Also, Blazor SSR now allows the utilization of Blazor components to handle form submissions, enabling server-side processing. To enable form submission handling from the server, developers can set up a model binding context using the CascadingModelBinder component, defining forms using the EditForm component and corresponding input components. However, while model binding and request data validation support is currently pending implementation, developers can manually handle request data using the FormDataProvider service.
Other notable Blazor-related changes include the support for using client-side routing to navigate to a specific HTML element on a page using standard URL fragments and Webcil packaging for Blazor WebAssembly apps.
Regarding the API authoring in ASP.NET Core, it has received attention as well. The framework now offers expanded support for form binding in minimal APIs, making it easier to handle and process form data. Furthermore, the API project template now includes a .http file, which allows developers to author and test HTTP requests directly within the project, simplifying the development workflow.
Native AOT (Ahead-of-Time) compilation gained a significant amount of improvement in .NET 8 Preview 4. Among these advancements is automatic logging and exception handling for parameter binding failures in both runtime-generated and compile-time-generated minimal APIs, which simplifies error tracking and handling during parameter binding. Another significant addition is the inclusion of annotations on subsystem entry points to identify features incompatible with Native AOT. These annotations serve as warnings for developers, alerting them to potential reliability issues. For example, invoking the AddControllers method in an application with Native AOT enabled will trigger a warning, indicating its lack of trim safety.
Other notable Native AOT-related changes include a reduced app size with configurable HTTPS support, the inclusion of the --aot flag in Worker Service templates for AOT publishing, additional default services in the slim builder, and JSON configuration changes in API templates.
In terms of authentication and authorization, the preview introduces MapIdentityApi(), an extension method that adds new API endpoints for user registration and login (/register and /login). This enhancement aims to simplify the use of ASP.NET Core Identity for authentication in JavaScript-based single-page apps (SPA) and Blazor apps. The JSON API endpoints provided by MapIdentityApi offer a more suitable solution for SPA apps and non-browser apps, replacing the default UI based on Razor Pages. Planned features for the identity API endpoints include support for two-factor authentication and email verification, as outlined in the ASP.NET Core GitHub repository.
Furthermore, significantly enhanced support for custom authorization policies has been introduced with the IAuthorizationRequirementData interface. This simplifies policy implementation by including the associated requirements in the attribute definition, reducing code complexity and improving maintainability. These improvements streamline the development workflow, offering developers increased flexibility in managing authorization within their applications.
Also, ASP.NET Core metrics have been enhanced to provide developers with better insights into the performance and behaviour of their applications. This update leverages the System.Diagnostics.Metrics API, offering an improved approach to data reporting and collection, supporting a range of measurements including counters, gauges, and histograms. Notably, the integration of metrics aligns with OpenTelemetry standards, ensuring seamless compatibility with the wider cloud-native ecosystem. Initially implemented for ASP.NET Core hosting, Kestrel, and SignalR, the future roadmap includes expanding metrics support to encompass additional APIs within the .NET framework.
Lastly, the comment section on the original release blog post has been buzzing with mixed reactions, as some users expressed disappointment regarding the significant time investment in Blazor, while others praised its productivity and effectiveness. For a comprehensive understanding of the various perspectives, it is highly recommended for users to explore the comment section and engage in the ongoing discussion.
Public Cloud Non-Relational Databases And Nosql Database Market to Witness Increase in …
MMS • RSS
Posted on nosqlgooglealerts. Visit nosqlgooglealerts
The Global Public Cloud Non-Relational Databases And Nosql Database market 2023 offers a comprehensive assessment of the industry including definitions, classifications, applications and industry restraint structure, which is beneficial for companies regardless of their size and revenue. This Study report covers the major market understandings and industry approach towards COVID-19 in the upcoming years. This report analyses region-wise revenue and volume for the forecast period of 2023 to 2030. For each company covered, the clients will find the report complete in all aspects as it covers all key components with valuable statistics and expert opinions in all regards. This analysis will help the reader to understand the potential worth of investment in a particular region.
Our research team has methodically performed quantitative and qualitative assessments of the Public Cloud Non-Relational Databases And Nosql Database market dynamics, considering a slew of features, including market penetration, portfolios, end-user industries, pricing structure and the key drivers, restraints, opportunities and challenges mostly affecting Public Cloud Non-Relational Databases And Nosql Database market growth.
Get a Sample Report + All Related Graphs & Charts (with COVID 19 Impact Analysis): https://globalmarketvision.com/sample_request/208370
Key Players in the Global Public Cloud Non-Relational Databases And Nosql Database Market
This report on the global Public Cloud Non-Relational Databases And Nosql Database Market contains a list of some leading companies in the market. Furthermore, it also includes detailed information on the competitors and recent developments in the market. The gathered information covers the manufacturers and their global revenue, with production data from manufacturers over the forecast period.
The leading competitor’s profiles in the global Public Cloud Non-Relational Databases And Nosql Database Market are:
IBM, MongoDB Inc., AWS (Amazon Web Services), Apache Software Foundation, Neo Technologies (Pty) Ltd, InterSystems, Google, Oracle Corporation, Teradata, DataStax, Software AG
The report presents a thorough overview of the competitive landscape of the global Public Cloud Non-Relational Databases And Nosql Database Market and detailed business profiles of the market’s notable players. Threats and weaknesses of leading companies are measured by the analysts in the report using industry-standard tools such as Porter’s five forces analysis and SWOT analysis. The Public Cloud Non-Relational Databases And Nosql Database Market report covers all key parameters such as product innovation, market strategy for leading companies, Public Cloud Non-Relational Databases And Nosql Database market share, revenue generation, the latest research and development, and market expert perspectives.
Global Public Cloud Non-Relational Databases And Nosql Database Market Segmentation:
By Product Types are:
Key-Value Storage Database, Column Storage Database, Document Database, Graph Database
By Applications are:
Automatic Software Patching, Automatic Backup, Monitoring and Indicators, Automatic Host Deployment
Geographically, the report provides a detailed analysis of consumption, revenue, market share, and growth rate for the following regions:
- North America (United States, Canada, Mexico)
- Asia-Pacific (China, India, Japan, Taiwan, South Korea, Australia, Indonesia, Singapore, Malaysia, Rest of Asia-Pacific)
- Europe (Germany, France, UK, Italy, Spain, Russia, Rest of Europe)
- Central & South America (Brazil, Argentina, Rest of South America)
- Middle East & Africa (Saudi Arabia, UAE, Turkey, Rest of Middle East & Africa)
The global Public Cloud Non-Relational Databases And Nosql Database market is segmented on the basis of product type, application, and end-use industries. The growth amongst the different segments helps you in attaining the knowledge related to the different growth factors expected to be prevalent throughout the market and formulate different strategies to help identify core application areas and the difference in target markets. The report also incorporates the most recent kinds of progress and enhancements in the business space that are seemingly going to impact this business space.
Frequently Asked Questions (FAQ):
- What is the market size of the Public Cloud Non-Relational Databases And Nosql Database Market?
- What are some of the major drivers for this market?
- Who are the major players in the Public Cloud Non-Relational Databases And Nosql Database market?
- What is the impact of COVID-19 on the Public Cloud Non-Relational Databases And Nosql Database market?
- Which region has the highest growth potential in the Public Cloud Non-Relational Databases And Nosql Database market?
Highlighting the key points included in the Public Cloud Non-Relational Databases And Nosql Database Market report:
- Key statistics on the global Public Cloud Non-Relational Databases And Nosql Database market status and key manufacturers.
- Basic scenario of the target market, along with its definition, various applications, and technology.
- The report offers the corporate profile, capacity, production value, product specifications, and market shares for every key vendor.
- Detailed market segmentation by type, company, by application, by region for the competitive breakdown analysis.
- Accurate report estimation with market development trends of global Public Cloud Non-Relational Databases And Nosql Database industry.
- Comprehensive analysis of upstream raw materials, downstream demand, and recent growth prospects of Public Cloud Non-Relational Databases And Nosql Database market.
- Thorough geographical landscape with dominated regions.
Direct Purchase this Market Research Report Now @ https://globalmarketvision.com/checkout/?currency=USD&type=single_user_license&report_id=208370
If you have any special requirements, please let us know and we will offer you the report at a customized price.
About Global Market Vision
Global Market Vision consists of an ambitious team of young, experienced people who focus on the details and provide the information as per customer’s needs. Information is vital in the business world, and we specialize in disseminating it. Our experts not only have in-depth expertise, but can also create a comprehensive report to help you develop your own business.
With our reports, you can make important tactical business decisions with the certainty that they are based on accurate and well-founded information. Our experts can dispel any concerns or doubts about our accuracy and help you differentiate between reliable and less reliable reports, reducing the risk in your decision-making. We can make your decision-making process more precise and increase the probability of achieving your goals.
Get in Touch with Us
Sarah Ivans | Business Development
Phone: +1 617 297 8902
Email: [email protected]
Global Market Vision
Website: www.globalmarketvision.com
MMS • RSS
Posted on nosqlgooglealerts. Visit nosqlgooglealerts
The report investigates the current status of the NoSQL Databases Software Market and analyses its future trends. The report explores the market opportunities available in the NoSQL Databases Software market and assesses the market from the currently available data. It provides in-depth information that helps market players understand and analyse the NoSQL Databases Software industry in terms of key products and services, value-added products, emerging markets, and industries. The report provides a basic analysis of the NoSQL Databases Software market, determines the current production and future demand for the products and services, and assists market players in planning for investment. It analyses the major exporting and importing producers, gives an overview of the industry, and offers a preliminary and secondary assessment of its future potential. The report summarizes the knowledge gaps and recommendations.
Key Players in the NoSQL Databases Software market:
MongoDB, Amazon, ArangoDB, Azure Cosmos DB, Couchbase, MarkLogic, RethinkDB, CouchDB, SQL-RD, OrientDB, RavenDB, Redis
Request a sample report : https://www.mraccuracyreports.com/report-sample/204170
The report studies the NoSQL Databases Software market using cross-sectional multiple regression analysis. The report provides estimates for future market demand. The report also uses secondary analysis to examine the NoSQL Databases Software market. The report provides a detailed analysis of the NoSQL Databases Software market value chain. The report analyses the factors affecting the NoSQL Databases Software market. The report lists the data and trends covering various components of the NoSQL Databases Software market. The report reviews the current NoSQL Databases Software market production and price patterns. The report reviews the production, imports, and profitability segments.
NoSQL Databases Software Market Types:
Cloud Based, Web Based.
NoSQL Databases Software Market Applications:
Large Enterprises, SMEs
Access full Report Description, TOC, Table of Figure, Chart, etc. @ https://www.mraccuracyreports.com/reportdetails/reportview/204170
This report includes data on the NoSQL Databases Software market and analysis of sales data, consumption, production, and the developments affecting the state of the NoSQL Databases Software market. The report looks at policy and regulations, competitive product positioning, technological innovation, cost performance, demand determination, and more. This report links you to the market to enhance opportunities. The report looks at historical data, market segments, producing countries, and domestic and global demand for certain products and services. The report examines the value chain, trade scenario, changes in industry structure in the past few years, new changes, and the impact of these changes on investors.
The report focuses on the key segments and investment planning initiatives. The report primarily discusses the NoSQL Databases Software industry considering the global scenario and presents different market scenarios to give a clear understanding of the issues and dynamism of this industry. Secondary and primary sources are consulted to obtain information relevant to the market in this report. In pursuit of reliability and relevance, government publications, official websites, news sources, and more are considered in the report.
Do Inquiry before Accessing Report at: https://www.mraccuracyreports.com/checkout/204170
MR Accuracy Reports is the number one publisher in the world and has published more than 2 million reports across the globe. Fortune 500 companies work with us. We also help small players understand the market, with a focus on consulting.
MMS • Anthony Alford
Article originally posted on InfoQ. Visit InfoQ
Meta AI Research open-sourced DINOv2, a foundation model for computer vision (CV) tasks. DINOv2 is pretrained on a curated dataset of 142M images and can be used as a backbone for several tasks, including image classification, video action recognition, semantic segmentation, and depth estimation.
Meta based the model on the Vision Transformer (ViT) architecture, with modifications for self-supervised learning objectives. To train the model, the team built an automated pipeline to curate a dataset of images scraped from the web. A major contribution of the work was an improved training process, which is twice as fast and uses one-third the memory of previous approaches. When evaluated on CV benchmarks, DINOv2 outperformed other self-supervised learning (SSL) models and showed performance comparable to or better than that of weakly-supervised models. According to Meta,
Going forward, the team plans to integrate this model, which can function as a building block, in a larger, more complex AI system that could interact with large language models. A visual backbone providing rich information on images will allow complex AI systems to reason on images in a deeper way than describing them with a single text sentence. Models trained with text supervision are ultimately limited by the image captions. With DINOv2, there is no such built-in limitation.
Deep learning models for CV tasks have typically relied on large datasets of images with human annotations; for example, ImageNet. In 2021, OpenAI released CLIP, a foundation model for CV that was trained using a form of weak supervision, where the annotations were automatically derived by scraping html tags and other web-based metadata associated with source images. That same year, Google published the ViT model, which uses SSL for training, and Meta published their work on the original version of DINO, which combines a ViT model with knowledge distillation, resulting in smaller models with comparable performance.
For DINOv2, Meta focused on gathering more training data and scaling up the training process. For the training data, Meta collected 1.2B unique images from the internet, then clustered them according to their similarity to the images in the ImageNet dataset for a final set of 142M images. To scale up training, Meta implemented a custom version of FlashAttention and used Fully-Sharded Data Parallel (FSDP) training with PyTorch. Overall, the project consumed about 200k GPU-days of compute.
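As a rough sketch of what an FSDP training setup looks like in PyTorch (using a stock torchvision ViT as a stand-in, not Meta's actual DINOv2 training code), assuming PyTorch 1.12+ and a distributed launcher such as torchrun:

import torch
import torch.distributed as dist
import torchvision
from torch.distributed.fsdp import FullyShardedDataParallel as FSDP


def main():
    # torchrun sets the rank/world-size environment variables.
    dist.init_process_group("nccl")
    torch.cuda.set_device(dist.get_rank() % torch.cuda.device_count())

    # Stand-in backbone; FSDP shards parameters, gradients, and optimizer
    # state across GPUs instead of replicating them as plain DDP does.
    model = FSDP(torchvision.models.vit_b_16().cuda())
    optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)

    images = torch.randn(8, 3, 224, 224, device="cuda")
    loss = model(images).square().mean()  # dummy objective for the sketch
    loss.backward()
    optimizer.step()


if __name__ == "__main__":
    main()

Such a script would be launched with, for example, torchrun --nproc_per_node=8 train_sketch.py.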
To evaluate DINOv2’s performance as a foundation model, the team tested it on a variety of CV tasks and compared it to several baseline SSL models as well as weakly-supervised models such as CLIP. On the ImageNet-1k classification task, DINOv2 showed a “very significant improvement” compared to other SSL models and also outperformed the weakly-supervised ones. It also set a new SSL state-of-the-art record on three video action recognition benchmarks and outperformed baselines on instance-level recognition benchmarks and on three monocular depth estimation benchmarks.
In a Hacker News discussion about the work, several users praised Meta’s recent work in computer vision as well as past contributions such as PyTorch. One did note a shift in Meta’s communications around their work:
As a grad student in this field, Meta has always had great contributions to the open source machine learning effort, through no small effort of Yann LeCun’s internal advocacy. What has changed recently is their PR strategy: [OpenAI] has basically shown everybody that it doesn’t matter if you have the best models if your publicity sucks.
The DINOv2 code and models are available on GitHub. The project site hosts an interactive demo of several computer vision tasks using DINOv2.
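For readers who want to try the released models, below is a short sketch of loading a pretrained backbone through torch.hub, following the pattern shown in the project's GitHub README; the entry-point name dinov2_vits14 comes from that repo, and the image here is a random placeholder:

import torch

# Download the ViT-S/14 DINOv2 backbone from the facebookresearch/dinov2 repo.
model = torch.hub.load("facebookresearch/dinov2", "dinov2_vits14")
model.eval()

# Image height and width must be multiples of the patch size (14).
image = torch.randn(1, 3, 224, 224)
with torch.no_grad():
    embedding = model(image)  # per-image feature vector from the backbone
print(embedding.shape)  # expected: torch.Size([1, 384]) for the ViT-S/14 variant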