Month: April 2023
MMS • A N M Bazlur Rahman
Article originally posted on InfoQ. Visit InfoQ
JEP 430, String Templates (Preview), has been promoted from Proposed to Target to Targeted status for JDK 21. This feature JEP proposes to enhance the Java programming language with string templates, which are similar to string literals but contain embedded expressions that are incorporated into the string at run time.
Java developers can now enhance the language’s string literals and text blocks with string templates that can produce specialized results by coupling literal text with embedded expressions and processors. The aim of this new feature is to simplify the writing of Java programs, improve the readability of expressions that mix text and expressions, and enhance the security of Java programs that compose strings from user-provided values.
This JEP introduces a new kind of expression called a template expression that allows developers to perform string interpolation and compose strings safely and efficiently. Template expressions are programmable and not limited to composing strings; they can turn structured text into any kind of object according to domain-specific rules. In a template expression, a template processor combines the literal text in the template with the values of any embedded expressions at runtime to produce a result. Consider the following example:
String name = "Joan";
String info = STR."My name is \{name}";
assert info.equals("My name is Joan"); // true
A template expression has a similar syntax to a string literal, but is prefixed with a template processor such as STR. The second line of the above code contains a template expression.
By contrast, plain string interpolation, as offered by many programming languages, allows programmers to combine string literals and expressions into a single string, providing greater convenience and clarity than traditional string concatenation. However, it can create dangerous strings that may be misinterpreted by other systems, especially when dealing with SQL statements, HTML/XML documents, JSON snippets, shell scripts, and natural-language text. To prevent security vulnerabilities, Java requires developers to validate and sanitize strings with embedded expressions using escape or validate methods.
A safer and more efficient solution is to introduce a first-class, template-based mechanism for composing strings that automatically applies template-specific rules: escaped quotes for SQL statements, no illegal entities for HTML documents, and boilerplate-free message localization. This approach relieves developers of the manual task of escaping each embedded expression and validating the entire string. This is exactly what Java's template expressions do, in contrast to the plain string interpolation used by other popular programming languages.
The design of the template expression makes it impossible to go directly from a string literal or text block with embedded expressions to a string with the expressions’ values interpolated. This is to prevent dangerously incorrect strings from spreading through the program. Instead, a template processor, such as STR, FMT or RAW, processes the string literal, validates the result, and interpolates the values of embedded expressions.
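For instance, STR interpolates immediately, while the standard RAW processor yields the unprocessed template so that a processor can be applied explicitly later. A minimal sketch, assuming the JDK 21 preview API described in JEP 430 (preview features enabled):
String name = "Joan";
String greeting = STR."Hello \{name}";   // interpolates immediately
StringTemplate st = RAW."Hello \{name}"; // unprocessed template, nothing interpolated yet
String later = STR.process(st);          // apply a processor explicitly later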
Here are some examples of template expressions that use multiple lines to describe HTML text, JSON text, and a zone form:
String title = "My Web Page";
String text = "Hello, world";
String html = STR."""
    <html>
      <head>
        <title>\{title}</title>
      </head>
      <body>
        <p>\{text}</p>
      </body>
    </html>
    """;
This yields the following output:
| """
| <html>
|   <head>
|     <title>My Web Page</title>
|   </head>
|   <body>
|     <p>Hello, world</p>
|   </body>
| </html>
| """
Another example is as follows:
String name = "Joan Smith";
String phone = "555-123-4567";
String address = "1 Maple Drive, Anytown";
String json = STR."""
{
"name": "\{name}",
"phone": "\{phone}",
"address": "\{address}"
}
""";
Similarly, this produces the following output:
| """
| {
| "name": "Joan Smith",
| "phone": "555-123-4567",
| "address": "1 Maple Drive, Anytown"
| }
| """
Another example:
record Rectangle(String name, double width, double height) {
double area() {
return width * height;
}
}
Rectangle[] zone = new Rectangle[] {
new Rectangle("Alfa", 17.8, 31.4),
new Rectangle("Bravo", 9.6, 12.4),
new Rectangle("Charlie", 7.1, 11.23),
};
String form = FMT."""
Description Width Height Area
%-12s\{zone[0].name} %7.2f\{zone[0].width} %7.2f\{zone[0].height} %7.2f\{zone[0].area()}
%-12s\{zone[1].name} %7.2f\{zone[1].width} %7.2f\{zone[1].height} %7.2f\{zone[1].area()}
%-12s\{zone[2].name} %7.2f\{zone[2].width} %7.2f\{zone[2].height} %7.2f\{zone[2].area()}
\{" ".repeat(28)} Total %7.2f\{zone[0].area() + zone[1].area() + zone[2].area()}
""";
The above code produces the following output:
| """
| Description Width Height Area
| Alfa 17.80 31.40 558.92
| Bravo 9.60 12.40 119.04
| Charlie 7.10 11.23 79.73
| Total 757.69
| """
Java provides two template processors for performing string interpolation: STR and FMT. STR replaces each embedded expression in the template with its (stringified) value, while FMT also interprets format specifiers that appear to the left of embedded expressions. The format specifiers are the same as those defined in java.util.Formatter. In cases where the unprocessed template is needed, the standard RAW template processor can be used. This processor simply returns the original template without any interpolation or processing.
Furthermore, developers can create their own template processors for use in template expressions. A template processor is an object that implements the functional interface ValidatingProcessor; its class implements the single abstract method of ValidatingProcessor, which takes a StringTemplate and returns an object. Template processors can be customized to perform validation at runtime and return objects of any type, not just strings.
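As a hedged sketch of what such a processor might look like, the following parses the composed text into a JSON object rather than returning a string. It assumes the JDK 21 preview API (whose interface and factory names evolved across drafts of JEP 430), and JSONObject stands in for any JSON library:
// Assumes the JEP 430 preview API; JSONObject is a placeholder for a real JSON library
var JSON = StringTemplate.Processor.of(
        (StringTemplate st) -> new JSONObject(st.interpolate()));

String name = "Joan Smith";
JSONObject doc = JSON."""
    {
        "name": "\{name}"
    }
    """;
// doc is a validated JSON object, not a raw string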
In conclusion, template expressions in Java make it easy and safe for developers to do string interpolation and string composition.
MMS • RSS
Posted on nosqlgooglealerts. Visit nosqlgooglealerts
How long until usage trends start to move the other way?
Have you ever seen one of those rope bridges strung across a river in a jungle, or high across a canyon? Scary as heck, at least for this writer. Investing in high growth IT shares these days reminds me of that as much as anything else. It is possible to see the objective on the other side, but the bridge, often made of plant material and a few slats of wood and rope, seems terribly unsteady, swaying in every breeze, with storm clouds visible all around. Perhaps not the most comfortable of metaphors, but in some regards a realistic description of the environment.
The storm clouds of April 25th seemed a bit more ominous than some. Consumer confidence was down, UPS (UPS) forecast weak results as consumer behavior changed for the worse, and First Republic (FRC) continued to unravel and needs another rescue. Within the tech space, Tenable (TENB) indicated that some of its expected deals didn't close at the end of the quarter due to uncertainties created by the turmoil in the banking sector following the failure of Silicon Valley Bank. And then came earnings reports from Google (GOOG) (GOOGL) and Microsoft (MSFT) suggesting that reports of the demise of the IT space have been exaggerated.
This is an article about MongoDB (NASDAQ:MDB), and not an article about macro conditions. But I want to acknowledge again, if it is necessary to do so, that investing in Mongo, as well as in any other high growth IT name, is unlikely to work until investors are willing to look across the chasm and consider the very substantial opportunities for Mongo that are outlined in the balance of this article.
It ought to be obvious after the last 18 months that high growth IT stocks simply won’t work as investments in a risk off environment. And it also ought to be evident that in an environment with macro headwinds, the percentage growth of IT companies is likely to compress. Further, the level of growth expected from software companies whose model is based on usage is sensitive to macro headwinds. At this point, none of this is new or unknown, and yet when macro data is seen as disappointing, investors sell high growth IT shares, even when the companies have already acknowledged the changed environment and altered their forecasts accordingly.
I think that this is a good time to consider the shares of MongoDB – not because I doubt that the macro environment is deteriorating; the signs of that are too pervasive and substantial to ignore – but because the shares and the company's forecast reflect that deterioration. Mongo dominates a high growth space, that of the NoSQL database. It has technology advantages, and its strategy of appealing to developers as the platform of choice has resonated. I am not trying to forecast when the Fed will finally acknowledge that its demand destruction has gone far enough. I think it is a matter of months, if not less, but I surely do not know. Obviously, the fixed income traders who are bidding down yields substantially at the moment feel they have good insight into the Fed pivot.
I surely don’t know when sentiment toward high growth IT shares is going to finally pivot. Perhaps when the Fed actually articulates a victory over inflation, or when it actually pivots on rates. It is easy to see just how toxic current sentiment is – there have been numerous surveys of investor sentiment, including those of both institutional and retail components, that make that point – which is why valuations in the space have continued to compress even when results are better for companies such as Mongo and turn out to be better than feared. It can be difficult and lonely to recommend the shares of a company such as Mongo that will be battling macro headwinds. But to me, the logical time to enter a position is when everyone is already negative and is suggesting that earnings reports should be black bordered and positioned next to the obituary page.
Mongo shares may not seem cheap now, and on some measures they aren’t. But if one looks across the proverbial chasm and dares to cross that flimsy bridge, I think a case can be made as to why the shares are far cheaper than might be surmised by some commentators. The aphorism popularized by the sage of Omaha, Warren Buffett, about being greedy when others are fearful ought to be in play here. I will discuss the valuation at the end of this article, but while currently, based on the company’s recent guidance, the shares have an EV/S ratio of about 9.5X, and the company has a modest free cash flow margin, in a recovery scenario the components of that EV/S ratio will change substantially, and the company’s free cash flow margin will grow several times.
In the last several years many IT companies have developed business models based on consumption, rather than seats. It is, I believe, a win/win situation. The users actually pay for the value they receive, and are not stuck buying seats that are often under-utilized and which represent a fixed cost that hurts in times of economic stress. And most of the time, a usage paradigm is going to reward vendors with higher revenue in return for some variability and uncertainty. Over the course of years, the vendors will most likely wind up with more revenue, and conceptually, at least, new users are easier to sell, knowing that their software bills will be part of opex and not capex and will relate to the actual value they are receiving from an application.
But for at least the last 9 months the paradigm has been troublesome for the vendors, although perhaps comforting for the users. There is a great debate amongst software analysts and investors as to when “cloud optimization” will abate and usage growth will start to resume traditional patterns.
Much as I might wish for it, I certainly don't have a crystal ball that provides specific insights on the subject. Recently the analyst team at Morgan Stanley released the findings from its AlphaWise IT survey. I didn't think the findings were earth shaking; about 75% of the respondents said that 2023 would be a year of cloud optimization/digestion, which is consistent with what vendors have been reporting for some quarters at this point. The MS analyst team said they were surprised that this cohort of respondents said they would reaccelerate their cloud spending next year and were planning on having an increasing percentage of applications reside in the public cloud post optimization.
IT spending optimization initiatives are a consistent theme for users. And once they have run their course, usage growth returns simply because usage is like data: there is inevitably more of it. As applications get deployed and become part of the operating fabric of an enterprise, and as users find new insights to be had from applications, usage rises. It is not as though there is some well of new optimizations – they are based almost entirely on remediating workflows that have become inefficient, and that is not something that is done as a consistent process.
Before diving into some of the relevant details around MDB's current and future outlook, I think the following quote from the CEO on the latest conference call is worth parsing.
Our principal focus is acquiring new workloads, or said another way, new applications, which is the biggest driver of our long-term growth. In our market, it's important to understand that the unit of competition is the workload; getting both new and existing customers to deploy new workloads on our data platform is our overarching goal. Once the workload has been onboarded, its consumption growth is not something we can meaningfully influence. Some workloads will grow faster than others depending on the underlying business drivers for their specific application, the macro environment, seasonality and other factors.
While we cannot control the rate of growth of existing workloads, we do know workloads typically grow over time. So as long as we keep acquiring new workloads at a healthy rate, we are well-positioned for the long run.
In a broad sense, it is not possible for vendors to greatly influence consumption. Over time, the usage of most apps/workloads is going to grow. It is my opinion that long term investors should try not to focus too much on consumption which has a cyclical component that is beyond the control of Mongo and other vendors with usage based models. It can be easier to simply focus on reported revenue growth and look at statistics relating to the percentage change in the various metrics related to sales performance on a quarterly basis. But to some extent those metrics are backward rather than forward looking.
In my opinion, the aspect of the business on which to focus is the growth of new applications. Some of those will create major revenue streams; others will have less of an impact. With that as a backdrop, which companies make the best sense as investments looking ahead to next year in an improving environment for IT spending growth? There are, to be sure, many IT companies these days with usage models which will benefit from a return to usage growth and will also benefit as applications migrate to the public cloud. And there are a number of IT vendors who have been able to acquire new customers and sell additional functionality to existing customers even though the revenue growth from that success is less now than it was 2-3 quarters ago. There isn't much history about how consumption tracks either during a recession or in a recovery. The last real recession in the economy was more than a decade ago, and none of these companies existed in meaningful form at that time. What is available is the data recorded during the initial stages of the Covid-19 lockdown, and then the recovery from that impact, as work-from-home paradigms were created. The companies themselves were surprised at the rapid increase in usage at that time.
At this point, I am looking for companies not so much from the point of view of their specific usage trends in these quarters of macro headwinds, but at companies that continue to acquire new workloads and are selling their user base on using their services for new applications. In my opinion, one of the best such companies is MongoDB and that is the subject of the rest of this article.
Aren’t MongoDB shares expensive?
Valuation is always a fraught question in looking at high growth IT companies. A year or so ago, critiques centering on profitability were easy to write when looking at Mongo. Initially profitability was not a concern of most investors, and Mongo certainly wasn't reporting any. I first wrote about MongoDB on the pages of SA way back in 2018. It has come a long way since that time. Back then, it had a non-GAAP operating loss margin of about 33% and its free cash flow margin was negative 16%. When I next wrote about the company last April, the shares had already fallen by about 35%, but more was to come. At that point, the non-GAAP operating loss margin was about 6%, and the company's 12 month operating cash flow margin had recently reached break even. The shares most recently made a low in mid-November, falling by another 60%+. Since that time the shares have seen a bit of a rally, albeit from a very compressed level, and are now up more than 50%, although far below the level they were at when I last wrote about the company.
Is that compression enough? Valuation compression by itself is not a reason to buy stocks. Mongo is a fairly controversial stock on SA with a neutral rating, but brokerage analysts have a much more positive evaluation with most ratings at buy, although the average price target of $253 is just 10% above the current price. The company was profitable in the latest fiscal year, and generated a modest level of free cash flow, and it is projecting about 5% non-GAAP operating margins in the current year.
The company has projected revenues of $1.51 billion for the current fiscal year, which represents growth of 17%. The consensus analyst projection for the following fiscal year, as depicted by First Call, at least as published, is for revenue growth of 19%. I doubt that anyone owning the shares or contemplating an investment in MongoDB believes that 19% number, although it accounts for the modest price objective of many covering analysts. Revenue growth for Mongo, at least in the short-term, is significantly correlated with usage. Usage growth has been pressured for Mongo since the summer of 2022. Has usage growth reached its nadir?
During its latest conference call, held in early March, the company indicated that usage growth had actually returned to "normal" in February, after falling short of expectations in December and January. Normal, however, is not the rates of early 2022, but the rates the company had seen in last fiscal year's Q2 and Q3.
Obviously, reported revenue growth in the 2nd half of this year and in fiscal '25 is going to depend on usage trends, and they are not really knowable in advance. My guess is that usage growth will start to return toward its long-term trend in 2024, although whether it can actually reach such a level, and when, is more or less imponderable. That said, I will devote some space in this article to why I think Mongo's usage growth, and thus its revenue growth, is probably underestimated by the consensus. In fact, my 3 year CAGR estimate for the company is in the mid-to-high 30% range, with a steady ramp in terms of non-GAAP margins and free cash flow generation.
If one looks at Mongo shares solely based on historical data, or even the company's projection for the current year, the shares still look expensive, although the forecast, like many other forecasts by high growth IT companies, appears to be de-risked. The EV/S ratio was 9.5X on April 25, and relative to average valuations these days, that isn't a particular bargain. (Mongo shares are highly volatile, and I picked the point at which I had finished writing this article as the date on which to compute the EV/S ratio.) And a 6% free cash flow margin is not a standout either. If one, however, looks at the growth opportunities and the improving margins, an opposite conclusion is reached, and that is the one I think is most likely.
Why is Mongo still growing rapidly?
First of all, it isn't AI. Or at least not directly AI. The company's conference call last month was one of the few held by an IT vendor that didn't include a mention of growth tailwinds from AI. I am a firm believer in the AI revolution. I have written about it on SA, including a somewhat recent article on MSFT. But the reality is that the proliferation of AI applications has been going on for years now. What is new is the emergence of generative AI and the attention it has garnered for the technology. AI itself has been inside many applications for some time now: Salesforce (CRM) has offered Einstein and IBM (IBM) has offered Watson for more than 5 years. All of the modern cyber security companies have used AI for years to identify anomalies and potential breaches. There is, perhaps, some thought that the popularization of AI will lead to more workloads for Mongo – but that is a weak correlation. Mongo is not, and is not likely to be, an "AI stock"; as much as AI is revolutionary at some level, this is not the stock for readers looking to invest in that technology.
That said, there are a few answers to the question of why MongoDB is likely to remain in hyper growth mode once the current economic environment changes, and to stay in that mode for years to come. At the most basic level, Mongo offers a NoSQL database, which has tremendous advantages compared to the relational database model that has been in use for about 50 years. Relational databases simply can't provide the performance necessary to ensure end users have a reasonable experience. Mongo has been, is, and will continue to be a company focused on developers. Developers find relational database technology difficult to work with. The technology doesn't cope well with unstructured data, and it was never meant for use with internet workloads that require massive scaling over a brief time period. And the dominant vendor in the space, Oracle (ORCL), is well known for aggressive and intrusive sales practices and contract terms. The database space was ripe for disruption when Mongo emerged offering its NoSQL technology.
That said, the relational database market is enormous. While Oracle no longer reports relational database revenue explicitly, the relational market alone is apparently worth $70 billion in annual revenues. The market for NoSQL databases is still smaller than the market for legacy technology but is growing several times faster – the linked analysis suggests a CAGR of 30% for the next several years. And Mongo is the dominant company in that space with a market share of 45%. Oracle does, of course, have an offering, MySQL, which it acquired when it acquired Sun in 2010. It exists, but is not an effective competitor in the space for users outside the Sun/Oracle ecosystem.
As management has stated many times, the real measure of growth for Mongo is workload acquisition. Not all workloads will have similar usage. New workloads are being continuously envisaged and constructed. Mongo's percentage revenue growth has obviously declined. A year ago, before macro headwinds impacted usage trends substantially, the company was growing revenues by 57%. At that time, it noted that a small part of its business was being impacted by macro headwinds that had trimmed 1 percentage point from the sequential growth of Atlas, these days its dominant product offering. Last quarter revenue growth was 36%. Its forecast for revenue growth this fiscal year is 17%.

The 17% forecast reflects a continuation, but not a further deterioration, of the usage trends the company has experienced over the last couple of quarters. The company's forecast for its fiscal Q1 reflects the heightened slowdown in consumption over the holiday period. It also reflects the fact that Q1 has 3 fewer days than Q4, which will obviously impact consumption revenue. In addition, the full year 17% growth forecast reflects a significantly smaller contribution from the growth of what is called Enterprise Advanced, the company's initial product. EA has had several quarters of above-trend growth; the company is forecasting that EA comparisons will be constrained by those elevated year-earlier periods. Because Enterprise Advanced has a subscription/seat-based pricing model, current accounting conventions call for a substantial component of upfront revenue recognition, so changes in EA deployments have more substantial short-term revenue impacts than growth in the company's Atlas cloud product.
I think everyone interested in, or invested in, Mongo shares is well aware of this reported growth deceleration. What may not be as well appreciated is that through this period, new customer acquisition has remained at strong levels and has not deteriorated as might be expected if there were existential demand issues. Specifically, the company's direct sales customers, those with the highest level of contract value, have been increasing by about 500 per quarter over the past year. The smaller customers, best represented by the growth of Atlas users, have been rising by about 1,700 per quarter, just slightly below trends earlier in 2022.
New customers are being sold as part of a paradigm of workload acquisition. There has been no real slowdown in workload acquisition, but workload acquisition impacts revenue over time as applications get written, deployed and go into production. So, not all of the new workload acquisition activity is reflected in usage/revenue thus far in 2023.
Another major source of growth relates to migration of users from their legacy relational database model to Atlas. That is something that has been going on consistently for some time now, but it is more of a focus for MDB in an environment in which concerns abound that it will be more difficult to secure required approvals for projects to launch new workloads. Last year, Mongo introduced what it calls its Relational Migrator which includes an enhanced user interface and what is described as a data synch engine. The version of Migrator that customers can directly use is scheduled for availability later this year.
Last quarter the CEO indicated that a growth focus for MDB has been its search capability. As with most other IT vendors, one of the ingredients of sustained high growth is to offer users solutions in adjacencies. In the case of Mongo, one of the adjacencies of choice currently is search. There are, of course, other adjacencies, one of which is Time Series, a specialized variant of the standard database solution offered by the company that supports additional use cases.
It is hard as an analyst to know precisely what percentage growth expectations investors have for Mongo these days; they are certainly far greater than what is in the published consensus. Mongo's growth will probably exceed trends when the current climate of macro headwinds abates.
I am not about to hazard a guess as to exactly when that will be. Anecdotally, I have heard of some suggestions that the deceleration in usage growth has seemingly abated – not reversed, but abated. That said, most observers will be laser-focused on exactly what the cloud hyper-scalers have to say on the subject – although by now, it should be obvious that the exact correlation between hyper-scaler cloud usage and usage for Mongo’s Atlas is nothing like one for one.
What will Mongo's revenue growth look like in a recovery scenario? Anything I say here has to be considered a guess. I don't imagine that Mongo can return to 57% growth, either next year or in the foreseeable future. On the other hand, I would be surprised if Mongo didn't grow faster than the industry analysts' forecast of 30% market growth. It is enjoying some success pushing into adjacencies that are not considered in market growth, and the company has been, and is likely to continue, taking share in its market. I do think that in recovery quarters its percentage revenue growth rate will scrape the 40% level. As long as the company continues its focus on acquiring workloads/use cases, its prospects of remaining a hyper growth vendor are strong.
When commentators write that Mongo shares are overvalued – and there are of course many such commentators – they overlook the likely growth upswing during a recovery and choose instead to focus on the current state of the IT space and its effect on Mongo's growth. That is simply a backward-looking, as opposed to forward-looking, methodology for evaluating high growth shares, and not one that I choose to follow.
Competition
While Mongo may have been a pioneer in developing and popularizing the NoSQL database, there are of course many competitors. All of the hyperscalers are in the market and have been for years now. I have linked here to a 3rd party analysis of competitors; I didn't find anything in this analysis or others publicly available that was particularly useful in evaluating how product features were impacting competition.
I am not going to try to address all of the functional advantages offered by Mongo. For as long as I have followed the database market it has been characterized by an avalanche of claims with regards to features and functions, and that remains the case. In most cases, trying to evaluate all of the claims will not result in a better investment conclusion. I certainly do not purport to be some kind of expert on the technology of the database market; I know just enough to conclude that Mongo has a desirable set of offerings that allow it to win over the key developer community.
The reason Mongo has been increasing its market share is the same one that has been in evidence for years; developers prefer to use Mongo and they continue to be the major proponents of Mongo in its largest accounts. Over time, and in many cases, the influence of developers often leads to Mongo becoming a standard, but this is a lengthy and involved process. It would appear that despite macro headwinds this paradigm is still in evidence.
Mongo also competes with some very specialized point products such as companies who offer a database exclusively for a particular workload such as that of time series referenced above. In the current environment, there is some vendor consolidation occurring, and Mongo has been a beneficiary of that trend. The company maintains that something similar has been occurring with regards to the company’s search functionality, where developers use the Mongo platform to enable an application built on the company’s database and search functionality. It is not something that the company has quantified, and I am not altogether sure that it can be quantified, but it is another factor that leads me to conclude that Mongo will be able to grow at rates above the market in a less hostile macro environment.
Mongo has an offering in the space known as serverless databases. This is just now becoming a mainstream technology, but it will almost inevitably become a greater proportion of total database installations because of its advantages in terms of cost and architectural factors. Mongo is likely to enjoy competitive success here, and it is part of the company's evolution to address the many corners of the database market.
It has been said by some that Mongo shares are primarily held by its community of developers. That is probably apocryphal: 93% of Mongo shares are held by institutions and another 3.5% by insiders, leaving very little to be held by individual investors. On the other hand, Mongo's developers are an enthusiastic and vocal community, and the company caters to them. This paradigm has become a virtuous circle and is likely to remain so, given management's priorities.
Mongo’s Profitability Pivot – Is it fast enough?
What was sound corporate strategy a year ago simply will not satisfy investors in the current environment. Mongo is profitable… barely, and it is generating a bit of cash. The question relates to the planned cadence of profitability improvement. The company is forecasting a 100 bps improvement in non-GAAP operating margins this year. That rate of improvement is, no doubt, a sticking point for some investors, and is certainly less than the profitability improvements being forecast by most other high growth IT vendors.
Overall, while Mongo has been able to avoid layoffs, it is reducing its hiring and the overall growth of its opex. Part of what is embedded in the conservative cadence of opex margin improvement is a couple of factors that helped results in the last fiscal year and are unlikely to recur this year: most notably, the stronger growth in Enterprise Advanced, which has a front-loaded revenue recognition model, and the revenue recognized in Q4 from unused contractual commitments. In turn, this has led to the company forecasting high teens revenue growth, which constrains the cadence of operating margin improvement. Overall, Mongo is forecasting opex growth in the low-to-mid teens range, which includes merit increases and some selective hiring.
Many IT vendors are facing the vexing question of how to size their business during this time of macro headwinds, while ensuring their choices do not hobble growth in a recovery. I am not sure that there is a single right answer to that kind of Hobson's choice. Writing about Mongo and its shares at the end of April, I imagine that most stakeholders and potential shareholders are aware of the company's decisions, and this less-than-average increase in non-GAAP operating margins is already baked into the valuation.
Mongo uses stock based compensation. Last quarter, stock based compensation was 27% of revenues compared to 26% of revenues the prior year. I look at dilution as the actual cost of stock based comp as opposed to the results of using the Black Scholes formula. Dilution last quarter was 0.87%. I have used slightly more than 3.5% dilution for the current year in calculating valuation. Given the expected decrease in hiring rates this year, I expect that SBC expense ratios will decline, although it takes longer for that to feed through into weighted average shares.
Gross margins were 73% of revenue last quarter compared to 69% of revenues the prior year. Mongo’s non-GAAP gross margins have improved, but are still being constrained by the rapid growth of Atlas which has an entirely ratable revenue recognition model. Last quarter, as mentioned, gross margins were above trend, because of the recognition of previously unutilized contractual usage commitments.
Last quarter did see some opex ratio improvements, but this business model has a long way to go before it reaches levels that most would find reasonable. Sales and marketing costs were about 42% of revenue last quarter on a non-GAAP basis compared to 44% of revenue in the prior year period. Research and development expense was 19% of revenue last quarter, compared to 19.5% of revenue a year ago, and general and administrative expense was about 8.4% of revenue compared to 9% of revenue the prior year. Overall, non-GAAP opex was 70% of revenue in this latest quarter, significantly less than the 76% of revenue reported in the year earlier period. About half of the improvement in the ratios was apparently a result of the revenue recognition of the previously unbilled commitments.
The company's free cash flow can be influenced by the level of deferred revenue generated by its Enterprise Advanced non-cloud offering. Over time, as Atlas becomes even more dominant as a revenue source, changes in deferred revenue will diminish as a driver of free cash flow. The increase in deferred revenues fell last quarter from the year-earlier level, which was something of a record for Enterprise Advanced bookings/renewals. Last quarter, free cash flow rose about 15% year on year; free cash flow growth was constrained by a substantial rise in receivables. The company's full year free cash flow margin was negative. I expect free cash flow margins to be in the range of 5%-6% this year, as they should track the improvement in non-GAAP operating margins and, in addition, the company is not likely to see a continued increase in receivables.
Wrapping Up – Mongo’s valuation and the case to own the shares
I started this article by saying it isn't about macro trends. That said, it is worth noting that the usage issue, and cloud optimization, has been somewhat resolved by Microsoft's earnings release and guidance, and to a lesser extent by the release of Google's numbers showing sustained growth in GCP revenues. I wrote earlier in the article about anecdotal data points suggesting that the most aggressive cloud optimization effects were beginning to abate – not reverse, but abate. That now seems to be confirmed by 2 of the 3 large public cloud vendors.
At this point, Mongo's EV/S ratio of just over 9.4X (based on the closing price of 4/25) is above average for its growth cohort. But that is essentially a measure of macro headwinds, and a couple of factors unique to MDB's revenue. If cloud optimization is now at a peak, then growth estimates for Mongo are far too low for FY '25. It is going to be one of the most significant beneficiaries of a return to less painful conditions in the enterprise IT space, as its usage model will drive substantial upside to what the current published consensus suggests. While the company's current profitability forecast and free cash flow generation are certainly not at a level that investors find acceptable, trends in profitability and cash flow generation are highly correlated with the trajectory of revenue growth.
In the article I focused on why the company is winning, and why it ought to continue to grow more rapidly than its core market, that of NoSQL databases. I think understanding the company’s workload acquisition strategy is a key tenet in supporting my buy recommendation for the shares. While Mongo has competition, it has continued to increase its market share in its core market, and to grow even more rapidly by starting to sell solutions in adjacencies such as search. And I reiterated my belief that Mongo’s focus on the developer community was resonating in the market.
I have made the point several times that Mongo shares are unlikely to perform well consistently until investor sentiment pivots to a more risk-on mode. And I make no forecast as to when that pivot really happens; while I would like to believe that the results of Microsoft, and to an extent Google might ultimately impact investor sentiment, I suspect that there is still lots of fear, uncertainty and doubt to overcome.
Mongo should be on a short list for investors willing to look across the chasm and invest in an IT recovery. It is not for the risk averse, or for those focused on stock based compensation as reported. While investing in something seemingly as mundane as database technology doesn't have the pizzazz of investing in the generative AI space, there is likely some correlation between Mongo workloads and applications built using generative AI. But not enough to make that a pillar of the investment case.
I am willing to look across the chasm, knowing that despite the quarterly reports of two hyper-scalers there are still macro headwinds. It is on that basis that I believe that Mongo will generate positive alpha over the next year, and that the current valuation provides investors with an attractive entry point.
MMS • Ben Linders
Article originally posted on InfoQ. Visit InfoQ
Many approaches to software architecture assume that the architecture is planned at the beginning. Unfortunately, architecture planned in this way is hard to change later. Functional programming can help achieve loose coupling to the point that advance planning can be kept to a minimum, and architectural decisions can be changed later.
Michael Sperber spoke about software architecture and functional programming at OOP 2023 Digital.
Sperber gave the example of dividing up the system's code among its building blocks. This is a particularly important kind of architectural decision, as it allows work on different building blocks to proceed separately, possibly with different teams. One way to do this is to use Domain-Driven Design (DDD) for the coarse-grained building blocks – bounded contexts:
DDD says you should identify bounded contexts via context mapping – at the beginning. However, if you get the boundaries between the contexts wrong, you lose a lot of the benefits. And you will get them wrong, at least slightly – and then it’s hard to move them later.
According to Sperber, functional programming enables late architecture and reduces coupling compared to OOP. In order to defer macroarchitecture decisions, we must always decouple, Sperber argued. Components in functional programming are essentially just data types and functions, and these functions work without mutable state, he said. This makes dependencies explicit and coupling significantly looser than with typical OO components. This in turn enables us to build functionality that is independent of the macroarchitecture, Sperber said.
Sperber made clear that functional programming isn’t “just like OOP only without mutable state”. It comes with its own methods and culture for domain modelling, abstraction, and software construction. You can get some of the benefits just by adopting immutability in your OO project. To get all of them, you need to dive deeper, and use a proper functional language, as Sperber explained:
Functional architecture makes extensive use of advanced abstraction, to implement reusable components, and, more importantly, supple domain models that anticipate the future. In exploring and developing these domain models, functional programmers frequently make use of the rich vocabulary provided by mathematics. The resulting abstractions are fundamentally enabled by the advanced abstraction facilities offered by functional languages.
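To make the contrast concrete, here is a minimal Java sketch of a component in the functional style: an immutable record plus pure functions, with every dependency passed in explicitly. This is an illustration of the idea, not code from Sperber's talk:
import java.util.List;
import java.util.function.DoubleUnaryOperator;

// An immutable domain value: no setters, no hidden state
record Order(String id, List<Double> itemPrices) {}

// A "component" is just a set of pure functions over immutable data
final class Pricing {
    private Pricing() {}

    // The discount policy is an explicit parameter rather than a hidden
    // dependency, so callers and tests can swap it freely
    static double total(Order order, DoubleUnaryOperator discount) {
        double sum = order.itemPrices().stream()
                .mapToDouble(Double::doubleValue).sum();
        return discount.applyAsDouble(sum);
    }
}

// Usage: all dependencies are visible at the call site
// double due = Pricing.total(new Order("A-1", List.of(9.99, 5.0)), s -> s * 0.9);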
InfoQ interviewed Michael Sperber about how our current toolbox of architectural techniques predisposes us to bad decisions that are hard to undo later, and what to do about this problem.
InfoQ: What are the challenges of defining the macroarchitecture at the start of a project?
Michael Sperber: A popular definition of software architecture is that it’s the decisions that are hard to change later. Doing this at the beginning means doing it when you have the least information. Consequently, there’s a good chance the decisions are wrong.
InfoQ: What makes it so hard to move boundaries between contexts?
Sperber: It seems in the architecture community we have forgotten how to achieve modularity within a bounded context or a monolith, which is why there’s this new term “modulith”, implying that a regular monolith is non-modular by default and that its internals are tightly coupled.
InfoQ: So you’re saying we don’t know how to achieve loose coupling within a monolith?
Sperber: Yes. This is because the foundation of OO architecture is programming with mutable state, i.e. changing your objects in place. These state changes make for invisible dependencies that tangle up your building blocks. This does not just affect the functional aspects of a project, but also other quality goals.
InfoQ: Can you give an example?
Sperber: Let’s say you choose parallelism as a tactic to achieve high performance: You need to choose aggregate roots, and protect access to those roots with mutual exclusion. This is tedious work, error-prone, hard to make fast, and increases coupling dramatically.
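A minimal Java sketch of that contrast (an illustration, not code from the talk), using a simple account balance: shared mutable state requires locking ceremony, while immutable values side-step it entirely.
// With mutable state, every shared object needs explicit mutual exclusion
final class Account {
    private double balance; // mutable, shared across threads

    synchronized void deposit(double amount) { // easy to get wrong, hard to make fast
        balance += amount;
    }
}

// With an immutable value, a "change" is just a new value; no locks are
// needed because nothing is modified in place
record AccountValue(double balance) {
    AccountValue deposit(double amount) {
        return new AccountValue(balance + amount);
    }
}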
InfoQ: What’s your advice to architects and developers if they want to improve the way that they take architectural decisions?
Sperber: Even if you can’t use a functional language in your project, play with the basics of functional programming to get a feel for the differences and opportunities there. If you’re new to FP, I recommend the How to Design Programs approach to get you started – or DeinProgramm for German speakers.
There are also two books on software construction with functional programming:
Article: Respect. Support. Connect. The Manager’s Role in Building a Great Remote Team
MMS • Kinga Witko
Article originally posted on InfoQ. Visit InfoQ
Key Takeaways
- Mindfulness at work is not just about individual benefits, it is also about creating a more mindful society and working environment. When we practice mindfulness, we are better equipped to solve the challenges that face us as an industry.
- Remote working comes with a unique set of challenges that can make it difficult for some people to adjust – one of the best daily routines is a virtual coffee – a short daily meeting open for everybody – but not mandatory. You can come, bring coffee or food and just hang around with other people. In such a friendly setup it’s easier to find out that we like similar things, read good books and are fun to work with.
- It's important to make a virtual office accessible for everybody – from the right tech gear, like laptops or tablets with good internet connections, to ground rules on how to interact efficiently within a team.
- In the workplace, it is important to understand and respect the different personality types of our colleagues. We may not always understand or agree with their approach, but it is important to recognize that everyone has their preferences, strengths, and weaknesses.
- The times when a boss dictates what to do and how to do it are gone. Modern companies require working together with a team and guiding people on their development paths.
The industry is changing, and our perception is changing. We have people working in different time zones; we build diverse teams.
As managers, we also face challenges in terms of needs, accessibility, gender, nationalities, and other conditions that influence our teams and working environments. We cannot build projects based on Excel sheets only, not considering peoples’ preferences and options for personal growth. We need to see real people – even if we meet them in a virtual working environment only.
Remote working challenges
The past three years have brought us to a different reality, in which we can easily choose the most productive place, suitable working hours, and remote employers. Most of us no longer have to spend 8-9 hours at the office, but can work from our own homes.
Remote working comes with a unique set of challenges that can make it difficult for some people to adjust.
It doesn't have to be "work from home" per se; it can also mean connecting from different offices, time zones, or countries, living life as a digital nomad, or being able to work from anywhere when life circumstances force you to do so.
Unfortunately, it comes with a set of challenges that employees face daily in their professional environment. The most common are:
- Isolation – remote workers may feel isolated and disconnected from their colleagues, which can affect their motivation and job satisfaction.
- Communication – different time zones, languages, tools, and habits.
- Distractions – such as family members needing attention, construction work nearby, loud noises, etc. I don't claim that it is easy to focus in an open-space office, but it is easier to separate the private from the professional when working there.
When the pandemic started, my colleagues from different countries around the world began to work from their homes, and we found it really attractive to compare our background noises. I heard some exotic animals, music that was new to me, the sound of never-ending traffic horns, and crying babies. On the other side, my "listeners" started to recognize my cat's noises, as he would usually start meowing incredibly loudly just after I launched a meeting.
Online meeting challenges
My team and I, all spread around the globe, wanted to be connected with the team in real-time. On a daily basis, some of the teammates just exchanged asynchronous messages on chat. They knew one another only from their pictures in Teams and had never had the opportunity to talk.
Have you ever tried to do a retrospective meeting with people located in four different time zones? I had this crazy idea of waking some of them up at 3 AM my time and not letting others fall asleep until 10 PM their time. We had one common session with food and fun, and it helped us shape a real team, not just co-workers.
One of my favourite daily routines is a virtual coffee – a short daily meeting open for everybody – but not mandatory. You can come, bring coffee or food and just hang around with other people. In such a friendly setup it’s easier to find out that we like similar things, read good books and are fun to work with.
It is a completely different situation when you meet somebody in person and then you cooperate with her/him remotely. If you only know somebody from Teams or Zoom, it is extremely hard to create a bond and trust.
What is most fun about this? Did you notice that people in Teams are all the same height? When you then meet somebody in person, it might be quite a shocking experience. I had a chance to meet my leader after a couple of months of working together online only. In Teams we were talking face-to-face and eye-to-eye, while in real life it turned out that he was almost 50 cm taller than I am and needed to sit down to have a normal conversation with me. Funny, isn't it?
When you lead a team, it is not enough to get information from the team; it is also your role to connect people with one another. I must say, this required the entire group to step out of their comfort zones (and sometimes out of bed early in the morning), but in the end, they didn't mind.
Making the virtual office accessible to every team member
It is important to set ground rules for communication and make everybody aware of them.
- Set a time for the everyday meetings (daily or sync) that suits everyone. If that's not possible, at least give people a chance to meet online from time to time. Be mindful when it comes to time zones.
- If you go for lunch or you are not available – mark it in your calendar or communicator – be transparent and require the same from other people in the project.
- Use separate communication channels to separate daily business from chatting. It’s important to have space for both, but not to mix them.
- Make it clear how the team communicates – whether it’s just Teams/Zoom/Slack, or you use emails as well – what is the preferred response time? And check if everybody is okay with that.
To make the virtual office accessible to every team member, there are a few things you can do:
Make sure everyone’s got the right tech gear, like laptops or tablets with good internet connections, so they can connect to the virtual office from wherever they are. Next, make sure everyone knows how to use the virtual office software, like Zoom or Slack or whatever you’re using. Maybe run some training sessions or create some tutorials.
Moreover, be open to different communication styles – some folks might prefer video calls, while others might prefer messaging.
In one of my projects, I had a developer who never showed his face in the meetings with the group. He used an avatar and was quite a mysterious person to the rest of the team. On the other hand, in one-on-one sessions, I was able to see him live. Some people might think it’s weird, but he probably had reasons behind it, so we respected it.
Lastly, make sure everyone knows they can reach out for help if they’re having any issues connecting to or using the virtual office. Accessibility is all about making sure everyone’s included, so do what you can to make sure everyone feels like they’re part of the team, whether they’re in the same room or on the other side of the world! My favourite tip: when at least one person is not in the same room, everybody connects from their desktops – not from the conference room. It improves sound quality and allows participation in the discussion on the same terms as everyone else.
Mindfulness at work
Mindfulness is often described as the practice of purposely bringing one’s attention to the present-moment experience without evaluation. This simple definition, however, belies the profound impact that mindfulness can have on our lives.
Research has shown that being mindful can bring a wide range of benefits, from reducing stress and anxiety to improving focus and memory. It can also help us to build better relationships, as it enables us to be more attuned to the needs and perspectives of people around us.
On the other hand, mindfulness is not just about individual benefits, it is also about creating a more mindful society and working environment. When we practice mindfulness, we become more compassionate and understanding, and we are better equipped to solve the challenges that face us as an industry.
To practice mindfulness, you can start small, as we did in my team. For example, we exchange yoga practices that help us be in better connection with our bodies. We also stay close to one another: we check on one another's feelings and moods and don't cross one another's boundaries. It all makes us a team, not just a group of co-workers.
Personality types help to better understand and respect people
We all come from different backgrounds, have different experiences, and possess unique personalities, and it is these differences that make us who we are.
In the workplace, it is important to understand and respect the different personality types of our colleagues. We may not always understand or agree with their approach, but it is important to recognize that everyone has their preferences, strengths, and weaknesses. By respecting these differences, we can work together more effectively, and create a more harmonious work environment.
Thomas Erikson’s book “Surrounded by Idiots” is a total game-changer when it comes to understanding personality types! The book argues that there are four main personality types – red, blue, yellow, and green – and that knowing someone’s type can help us understand how they think, feel, and behave.
For example, if you know someone's a blue, you might understand that they're detail-oriented and like to plan things out in advance, while a yellow might be more spontaneous and enjoy taking risks. This knowledge can help us communicate with others in a way that's more effective and respectful.
By understanding someone’s personality type, we can also learn to appreciate their strengths and weaknesses. Maybe you’re a red who’s great at taking charge, but not so great at listening to other people’s ideas – but you can respect a yellow’s ability to come up with creative solutions.
I'm almost 100% red and I have the entire package. I ALWAYS do my tasks on time (or even before the deadline), but barely do small talk; I just go straight to business. There are people who understand my pace, and we get along fantastically; there are also some who find my way of working rude. And I respect both. I know how hard it is for me to wait for a detailed analysis from a blue 🙂
How managers build relationships with their teams
Building strong relationships with your team is crucial for any manager, but it's especially important when you're working in software testing. As testers usually have the overall view of the project, they need to be in touch with everybody else on the project, not just their own group. This can be stressful both for you and your team members. Here are a few tips for building good relationships:
- Communication is key. Make sure you’re clear and transparent with your team about goals, expectations, and deadlines. Encourage open and honest communication, and be willing to listen to feedback and concerns. It is even more important if you work remotely and know each other only via Teams or Zoom.
- Show appreciation. Let your team know when they’re doing a great job, and celebrate their successes. Take the time to thank them for their hard work, and recognize their contributions.
- Invest in your team’s development. Support your team’s growth and learning, whether that’s through training, conferences, or mentorship. Show that you care about their careers, and want to help them achieve their goals.
- Have fun! Work can be stressful, so make sure you take the time to enjoy each other’s company. Plan team-building activities, celebrate birthdays and milestones, and find ways to inject a bit of fun and laughter into the workday.
From time to time I have the opportunity to work with my team in one office. On those days we eat cake, drink coffee together and have lunch. Our space in the office is also decorated with hand-made posters, funny sentences from our chat, and made-up certificates. I think people like to work from there and enjoy the good mood that we have created together.
Become an approachable person
The times when a boss dictates what to do and how to do it are gone. Modern companies require working together with a team and guiding people on their development paths. It’s no longer about focusing on KPIs, deadlines, and products, as that becomes the easiest way of losing great people and being the Worst Place To Work.
In such a team composition, a modern manager or a leader has to be approachable. Easier said than done, right?
First, make sure you’re always available for your team. Encourage them to come to you with questions, concerns, or just to chat. My tip is to have a regular slot in your calendar dedicated to your team only.
Second, be friendly and personable. Get to know your team members on a personal level, and show genuine interest in their lives and interests. Of course, you are at work, and not everybody would like to share their personal life details, but being open and caring is what you need to practice every day.
Last but not least – be transparent. Share information about what’s going on in the company, and be honest about any challenges or issues that arise. By being approachable, friendly, and transparent, you can build strong relationships with your team that will help you all succeed.
Global NoSQL Software Market Size, Analysis, Industry Trends, Top Suppliers and COVID …
MMS • RSS
Posted on nosqlgooglealerts. Visit nosqlgooglealerts
New Jersey, United States – The report offers an in-depth analysis of the Global NoSQL Software Market taking into account market dynamics, segmentation, geographical expansion, competitive landscape, and various other key aspects. The market analysts who have prepared the report have thoroughly studied the Global NoSQL Software market and have offered reliable and accurate data. The report analyses the current trends, growth opportunities, competitive pricing, restraining factors, and boosters that may have an impact on the overall dynamics of the Global NoSQL Software market. The report analytically studies the microeconomic and macroeconomic factors affecting the Global NoSQL Software market growth. New and emerging technologies that may influence the Global NoSQL Software market growth are also being studied in the report.
Both leading and emerging players of the Global NoSQL Software market are comprehensively looked at in the report. The analysts authoring the report deeply studied each and every aspect of the business of key players operating in the Global NoSQL Software market. In the company profiling section, the report offers exhaustive company profiling of all the players covered. The players are studied on the basis of different factors such as market share, growth strategies, new product launch, recent developments, future plans, revenue, gross margin, sales, capacity, production, and product portfolio.
Get Full PDF Sample Copy of Report: (Including Full TOC, List of Tables & Figures, Chart) @ https://www.verifiedmarketresearch.com/download-sample/?rid=153255
Key Players Mentioned in the Global NoSQL Software Market Research Report:
Amazon, Couchbase, MongoDB Inc., Microsoft, Marklogic, OrientDB, ArangoDB, Redis, CouchDB, DataStax.
Key companies operating in the Global NoSQL Software market are also comprehensively studied in the report. The Global NoSQL Software report offers definite understanding into the vendor landscape and development plans, which are likely to take place in the coming future. This report as a whole will act as an effective tool for the market players to understand the competitive scenario in the Global NoSQL Software market and accordingly plan their strategic activities.
Global NoSQL Software Market Segmentation:
NoSQL Software Market, By Type
• Document Databases
• Key-value Databases
• Wide-column Store
• Graph Databases
• Others
NoSQL Software Market, By Application
• Social Networking
• Web Applications
• E-Commerce
• Data Analytics
• Data Storage
• Others
Players can use the report to gain sound understanding of the growth trend of important segments of the Global NoSQL Software market. The report offers separate analysis of product type and application segments of the Global NoSQL Software market. Each segment is studied in great detail to provide a clear and thorough analysis of its market growth, future growth potential, growth rate, growth drivers, and other key factors. The segmental analysis offered in the report will help players to discover rewarding growth pockets of the Global NoSQL Software market and gain a competitive advantage over their opponents.
Key regions including but not limited to North America, Asia Pacific, Europe, and the MEA are exhaustively analyzed based on market size, CAGR, market potential, economic and political factors, regulatory scenarios, and other significant parameters. The regional analysis provided in the report will help market participants to identify lucrative and untapped business opportunities in different regions and countries. It includes a special study on production and production rate, import and export, and consumption in each regional Global NoSQL Software market considered for research. The report also offers detailed analysis of country-level Global NoSQL Software markets.
Inquire for a Discount on this Premium Report @ https://www.verifiedmarketresearch.com/ask-for-discount/?rid=153255
What to Expect in Our Report?
(1) A complete section of the Global NoSQL Software market report is dedicated to market dynamics, which include influencing factors, market drivers, challenges, opportunities, and trends.
(2) Another broad section of the research study is reserved for regional analysis of the Global NoSQL Software market, where important regions and countries are assessed for their growth potential, consumption, market share, and other vital factors indicating their market growth.
(3) Players can use the competitive analysis provided in the report to build new strategies or fine-tune their existing ones to rise above market challenges and increase their share of the Global NoSQL Software market.
(4) The report also discusses the competitive situation and trends and sheds light on company expansions and mergers and acquisitions taking place in the Global NoSQL Software market. Moreover, it brings to light the market concentration rate and the market shares of the top three and top five players.
(5) Readers are provided with the findings and conclusion of the research study in the Global NoSQL Software Market report.
Key Questions Answered in the Report:
(1) What are the growth opportunities for the new entrants in the Global NoSQL Software industry?
(2) Who are the leading players functioning in the Global NoSQL Software marketplace?
(3) What are the key strategies participants are likely to adopt to increase their share in the Global NoSQL Software industry?
(4) What is the competitive situation in the Global NoSQL Software market?
(5) What are the emerging trends that may influence the Global NoSQL Software market growth?
(6) Which product type segment will exhibit a high CAGR in the future?
(7) Which application segment will grab a handsome share in the Global NoSQL Software industry?
(8) Which region is lucrative for the manufacturers?
For More Information or Query or Customization Before Buying, Visit @ https://www.verifiedmarketresearch.com/product/nosql-software-market/
About Us: Verified Market Research®
Verified Market Research® is a leading global research and consulting firm that has been providing advanced analytical research solutions, custom consulting and in-depth data analysis for 10+ years to individuals and companies alike that are looking for accurate, reliable and up-to-date research data and technical consulting. We offer insights into strategic and growth analyses, data necessary to achieve corporate goals, and help make critical revenue decisions.
Our research studies help our clients make superior data-driven decisions, understand market forecasts, capitalize on future opportunities and optimize efficiency by working as their partner to deliver accurate and valuable information. The industries we cover span a large spectrum, including technology, chemicals, manufacturing, energy, food and beverages, automotive, robotics, packaging, construction, and mining and gas.
We, at Verified Market Research, assist in understanding holistic market indicating factors and most current and future market trends. Our analysts, with their high expertise in data gathering and governance, utilize industry techniques to collate and examine data at all stages. They are trained to combine modern data collection techniques, superior research methodology, subject expertise and years of collective experience to produce informative and accurate research.
Having serviced 5,000+ clients, we have provided reliable market research services to more than 100 Global Fortune 500 companies such as Amazon, Dell, IBM, Shell, Exxon Mobil, General Electric, Siemens, Microsoft, Sony and Hitachi. We have co-consulted with some of the world’s leading consulting firms, such as McKinsey & Company, Boston Consulting Group, and Bain and Company, on custom research and consulting projects for businesses worldwide.
Contact us:
Mr. Edwyne Fernandes
Verified Market Research®
US: +1 (650)-781-4080
UK: +44 (753)-715-0008
APAC: +61 (488)-85-9400
US Toll-Free: +1 (800)-782-1768
Email: sales@verifiedmarketresearch.com
Website: https://www.verifiedmarketresearch.com/
Java News Roundup: New OpenJDK JEPs, Payara Platform, Spring and Tomcat Updates, WildFly 28
MMS • Michael Redlich
Article originally posted on InfoQ. Visit InfoQ
This week’s Java roundup for April 17th, 2023 features news from OpenJDK, JDK 21, JMC 8.3.1, BellSoft, Spring Boot, Spring Security, Spring Session, Spring Authorization Server, Spring Integration, Spring for GraphQL and Spring Shell, WildFly 28, Payara Platform, Open Liberty 23.0.0.4-beta, Micronaut 3.9, Apache Tomcat updates, Ktor 2.3, JHipster Lite 0.32, JBang 0.106.3 and Gradle 8.1.1.
OpenJDK
JEP 446, Scoped Values (Preview), has been promoted from its JEP Draft 8304357 to Candidate status. Formerly known as Extent-Local Variables (Incubator), this JEP is now a preview feature following JEP 429, Scoped Values (Incubator), delivered in JDK 20. This JEP proposes to enable sharing of immutable data within and across threads. This is preferred to thread-local variables, especially when using large numbers of virtual threads.
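As a minimal sketch of the preview API in the JDK 21 early-access builds (compiled and run with --enable-preview; the details may still change before the final release):

public class ScopedValueExample {
    // A scoped value is bound for the duration of a run() call and is
    // immutable within that dynamic scope, unlike a thread-local variable.
    private static final ScopedValue<String> REQUEST_ID = ScopedValue.newInstance();

    public static void main(String[] args) {
        ScopedValue.where(REQUEST_ID, "req-42").run(ScopedValueExample::handle);
    }

    private static void handle() {
        // The binding is visible to any method called within the scope of run().
        System.out.println("Handling request " + REQUEST_ID.get());
    }
}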
JEP 447, Statements before super(), has been promoted from its JEP Draft 8300786 to Candidate status. This JEP, under the auspices of Project Amber, proposes to: allow statements that do not reference an instance being created to appear before the this() or super() calls in a constructor; and preserve existing safety and initialization guarantees for constructors. Gavin Bierman, consulting member of technical staff at Oracle, has provided an initial specification of this JEP for the Java community to review and provide feedback.
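A brief sketch of what the proposed syntax would allow; note that this is only a proposal at the Candidate stage and does not compile on current JDK releases:

public class PositiveInt {
    private final int value;

    public PositiveInt(int value) {
        // Under JEP 447, statements that do not reference the instance being
        // created may precede the explicit super() call, so arguments can be
        // validated before construction proceeds.
        if (value <= 0) {
            throw new IllegalArgumentException("value must be positive");
        }
        super();
        this.value = value;
    }
}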
JEP 448, Vector API (Sixth Incubator), has been promoted from its JEP Draft 8305868 to Candidate status. This JEP, under the auspices of Project Panama, incorporates enhancements in response to feedback from the previous five rounds of incubation: JEP 438, Vector API (Fifth Incubator), delivered in JDK 20; JEP 426, Vector API (Fourth Incubator), delivered in JDK 19; JEP 417, Vector API (Third Incubator), delivered in JDK 18; JEP 414, Vector API (Second Incubator), delivered in JDK 17; and JEP 338, Vector API (Incubator), delivered as an incubator module in JDK 16. This feature proposes to enhance the Vector API to load and store vectors to and from a MemorySegment as defined by JEP 424, Foreign Function & Memory API (Preview).
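A minimal sketch of loading and storing through a MemorySegment with the incubating API; it assumes the jdk.incubator.vector module is added and the preview Foreign Function & Memory API is enabled, and method names may still shift between incubations:

import java.lang.foreign.Arena;
import java.lang.foreign.MemorySegment;
import java.nio.ByteOrder;
import jdk.incubator.vector.FloatVector;
import jdk.incubator.vector.VectorSpecies;

public class VectorSegmentExample {
    private static final VectorSpecies<Float> SPECIES = FloatVector.SPECIES_PREFERRED;

    public static void main(String[] args) {
        try (Arena arena = Arena.ofConfined()) {
            MemorySegment segment = arena.allocate(SPECIES.vectorByteSize());
            // Load a vector from native memory, operate on all lanes at once,
            // and store the result back into the same segment.
            FloatVector v = FloatVector.fromMemorySegment(SPECIES, segment, 0L, ByteOrder.nativeOrder());
            v.add(1.0f).intoMemorySegment(segment, 0L, ByteOrder.nativeOrder());
        }
    }
}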
JEP 449, Deprecate the Windows 32-bit x86 Port for Removal, has been promoted from its JEP Draft 8303167 to Candidate status. This feature JEP, introduced by George Adams, Senior Program Manager at Microsoft, proposes to deprecate the Windows x86-32 port with the intent to remove it in a future release. With no intent to implement JEP 436, Virtual Threads (Second Preview), on 32-bit platforms, removing support for this port will enable OpenJDK developers to accelerate the development of new features.
JEP Draft 8305968, Integrity and Strong Encapsulation, and JEP Draft 8306275, Disallow the Dynamic Loading of Agents by Default, have been submitted by Ron Pressler, architect and technical lead for Project Loom at Oracle.
Integrity and Strong Encapsulation proposes to assure the integrity of code and data with a variety of features, such as strong encapsulation, that are enabled by default. Goals of this draft include: allow the Java platform to robustly maintain invariants required for maintainability, security and performance; and differentiate use cases where breaking encapsulation is convenient from use cases where disabling encapsulation is essential.
Disallow the Dynamic Loading of Agents by Default, following the approach of Integrity and Strong Encapsulation, proposes to disallow the dynamic loading of agents into a running JVM by default. Goals of this draft include: reassess the balance between serviceability and integrity; and ensure that a majority of tools, which do not need to dynamically load agents, are unaffected.
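For context, dynamically loading an agent today typically goes through the Attach API in the jdk.attach module; this is the mechanism the draft proposes to disallow by default. A minimal sketch, with the target process id and agent jar path passed as arguments:

import com.sun.tools.attach.VirtualMachine;

public class DynamicAttach {
    public static void main(String[] args) throws Exception {
        String pid = args[0];      // process id of the target JVM
        String agentJar = args[1]; // path to a jar with an agentmain entry point
        VirtualMachine vm = VirtualMachine.attach(pid);
        try {
            // This dynamic load into a running JVM is what the draft
            // proposes to require explicit opt-in for.
            vm.loadAgent(agentJar);
        } finally {
            vm.detach();
        }
    }
}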
JDK Mission Control (JMC) 8.3.1 has been released with notable fixes such as: being unable to open the JMX Console after installing plugins on macOS and Linux; being unable to edit Eclipse project run configurations after installing JMC plugins on Linux; and being unable to perform flight recordings on jlinked applications. More details on this release may be found in the release notes.
JDK 21
Build 19 of the JDK 21 early-access builds was also made available this past week featuring updates from Build 18 that include fixes to various issues. Further details on this build may be found in the release notes.
For JDK 21, developers are encouraged to report bugs via the Java Bug Database.
JDK 20
JDK 20.0.1, the first maintenance release of JDK 20, along with security updates for JDK 17.0.7, JDK 11.0.19 and JDK 8u371, was made available as part of Oracle’s Critical Patch Update for April 2023.
BellSoft
Also concurrent with Oracle’s Critical Patch Update (CPU) for April 2023, BellSoft has released CPU patches for versions 17.0.6.0.1, 11.0.18.0.1 and 8u371 of Liberica JDK, their downstream distribution of OpenJDK. In addition, Patch Set Update (PSU) versions 20.0.1, 17.0.7, 11.0.19 and 8u372, containing CPU and non-critical fixes, have also been released.
Spring Framework
The first release candidate of Spring Boot 3.1.0 delivers notable new features: improved Testcontainers support including support at development time; support for Docker Compose; enhancements to SSL configuration; and improvements for building Docker images. More details on this release may be found in the release notes.
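As a rough sketch of the improved Testcontainers support, a container declared as a bean can be marked so that Spring Boot derives the corresponding connection details automatically; the configuration class, container choice and image tag below are illustrative assumptions, not taken from the release notes:

import org.springframework.boot.test.context.TestConfiguration;
import org.springframework.boot.testcontainers.service.connection.ServiceConnection;
import org.springframework.context.annotation.Bean;
import org.testcontainers.containers.PostgreSQLContainer;

@TestConfiguration(proxyBeanMethods = false)
class TestDatabaseConfig {
    @Bean
    @ServiceConnection // replaces manually configured spring.datasource.* properties
    PostgreSQLContainer<?> postgresContainer() {
        return new PostgreSQLContainer<>("postgres:15");
    }
}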
The release of Spring Boot 3.0.6 primarily addresses CVE-2023-20873, Security Bypass With Wildcard Pattern Matching on Cloud Foundry, a vulnerability in which an application that is deployed to Spring Cloud for Cloud Foundry could be susceptible to a security bypass. Along with improvements in documentation and dependency upgrades, this release also provides notable bug fixes such as: integration of Spring Cloud for Cloud Foundry does not use endpoint path mappings; the ApplicationAvailability bean is auto-configured even if a custom one already exists; and default configuration substitutions in Apache Cassandra do not resolve against configuration derived from the spring.data.cassandra properties file. More details on this release may be found in the release notes.
Similarly, the release of Spring Boot 2.7.11 also addresses the aforementioned CVE-2023-20873 and provides improvements in documentation, dependency upgrades and the same bug fixes as Spring Boot 3.0.6. More details on this release may be found in the release notes.
Versions 6.1.0-RC1, 6.0.3, 5.8.3 and 5.7.8 of Spring Security have been released to primarily address CVE-2023-20862, Empty SecurityContext Is Not Properly Saved Upon Logout, a vulnerability in which serialized versions of the logout flow do not properly clear the security context and cannot explicitly save an empty security context to the HttpSessionSecurityContextRepository class. This results in users still being authenticated even after logout. More details on these releases may be found in the release notes for version 6.1.0-RC1, version 6.0.3, version 5.8.3 and version 5.7.8.
The first release candidate of Spring Session 3.1.0 delivers dependency upgrades and a new feature in which an instance of the StringRedisSerializer class is reused to eliminate the need to instantiate additional serializer instances. More details on this release may be found in the release notes.
The first release candidate of Spring Authorization Server 1.1.0 provides dependency upgrades and new features such as: support for device code and user code in the JdbcOAuth2AuthorizationService class; improvements in the OAuth 2.0 Device Authorization Grant that include adding tests and reference documentation; and improvements in the OpenID Connect 1.0 logout endpoint. More details on this release may be found in the release notes.
Similarly, versions 1.0.2 and 0.4.2 of Spring Authorization Server have also been released featuring dependency upgrades and notable bug fixes: the return of an incorrect INVALID_CLIENT token error code instead of the correct INVALID_GRANT token error code; a broken support link; the authentication secret now being saved after encoding upon registration of the client; and a consideration that would allow the use of localhost in redirect URIs. More details on these releases may be found in the release notes for version 1.0.2 and version 0.4.2.
Versions 6.1.0-RC1 and 6.0.5 of Spring Integration have been released, sharing notable changes such as: the removal of a trailing space in the IntegrationWebSocketContainer class; and improvements to the BaseWsInboundGatewaySpec and TailAdapterSpec classes, which did not override super methods and threw instances of NullPointerException due to the target field not being populated. More details on these releases may be found in the release notes for version 6.1.0-RC1 and version 6.0.5.
The first release candidate of Spring for GraphQL 1.2.0 delivers new features such as: an update to the SchemaMappingInspector class to support Connection types; support for pagination with Querydsl and Query By Example; and overall support for pagination and sorting. More details on this release may be found in the release notes.
Versions 3.1.0-M2, 3.0.2 and 2.1.8 of Spring Shell have been released featuring shared notable changes such as: builds upon Spring Boot 3.1.0-M2, 3.0.5 and 2.7.10, respectively; a backport of bug fixes; and a significant fix for custom type handling with positional arguments. More details on these releases may be found in the release notes for version 3.1.0-M2, version 3.0.2 and version 2.1.8.
WildFly
Red Hat has released WildFly 28 that ships with improved support for observability and full support for Jakarta EE 10. WildFly has added support for Micrometer and the MicroProfile Telemetry specification, but has removed support for MicroProfile Metrics. JDK 17 is recommended for production applications, but Red Hat has seen good results on JDK 20. More details on this release may be found in the release notes and InfoQ will follow up with a more detailed news story.
Payara
Payara has released their April 2023 edition of the Payara Platform that includes Community Edition 6.2023.4, Enterprise Edition 6.1.0 and Enterprise Edition 5.50.0.
Community Edition 6.2023.4 delivers: a fix for a Payara 6 deployment error with JDK 17 and Records; improvements in the SameSite cookie attributes in the Application Deployment Descriptor and a global HTTP network listener; and dependency upgrades to EclipseLink 4.0.1, EclipseLink ASM 9.4.0, Hazelcast 5.2.2 and ASM 9.4. More details on this release may be found in the release notes.
Similarly, Enterprise Edition 6.1.0 features: a fix for a Payara 6 deployment error with JDK 17 and Records; improvements in the SameSite cookie attributes in the Application Deployment Descriptor; and dependency upgrades to EclipseLink 4.0.1, EclipseLink ASM 9.4.0, Hazelcast 5.2.2 and ASM 9.4. More details on this release may be found in the release notes.
Enterprise Edition 5.50.0 ships with: a resolution for CVE-2023-24998, a vulnerability in Apache Commons FileUpload in which an attacker can trigger a denial-of-service with malicious uploads because the number of processed request parts is not limited; a fix for a Hazelcast NoDataMemberInClusterException; an improvement in the SameSite cookie attribute in the Application Deployment Descriptor; and a dependency upgrade to Hazelcast 5.2.2. More details on this release may be found in the release notes.
Open Liberty
IBM has released Open Liberty 23.0.0.4-beta featuring updated support for the Jakarta Data specification such that developers may now combine multiple ways of specifying ordering and sorting, defining a precedence. Sorting that is defined by the @OrderBy annotation or a query-by-method keyword is applied first, followed by the parameters from the Sort record on the method or the Pageable interface.
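A tentative sketch of how these pieces might combine in a Jakarta Data repository; the Product entity and repository are hypothetical, and since the specification was still evolving at the time of this beta, the exact package and type names should be treated as assumptions:

import jakarta.data.repository.OrderBy;
import jakarta.data.repository.Pageable;
import jakarta.data.repository.Repository;
import java.util.List;

@Repository
public interface Products {
    // @OrderBy is applied first; any Sort parameters carried by the
    // Pageable are applied afterwards, per the precedence described above.
    // Product is a hypothetical entity type for this example.
    @OrderBy("price")
    List<Product> findByCategory(String category, Pageable pagination);
}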
Micronaut
The Micronaut Foundation has released Micronaut Framework 3.9.0, which delivers new features such as: the ability to customize the package into which introspections are written via the targetPackage field of the @Introspected annotation; the ability to enable Cross-Origin Resource Sharing (CORS) configuration via the @CrossOrigin annotation; a breaking change in which the configuration property micronaut.server.cors.*.configurations.allowed-origins no longer supports regular expressions, to prevent accidental exposure of a user’s API; and updates to modules such as Micronaut Kubernetes, Micronaut Security, Micronaut CRaC, Micronaut Maven and Micronaut Launch. More details on this release may be found in the release notes.
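As an illustrative sketch of the new CORS support, the annotation can be applied to a controller route; the controller, route and origin below are invented for the example, and the annotation’s package is assumed from the Micronaut documentation:

import io.micronaut.http.annotation.Controller;
import io.micronaut.http.annotation.Get;
import io.micronaut.http.server.cors.CrossOrigin;

@Controller("/api")
public class GreetingController {

    // Only the explicitly listed origin may call this endpoint cross-origin;
    // note that allowed origins no longer accept regular expressions.
    @CrossOrigin("https://example.com")
    @Get("/greeting")
    public String greeting() {
        return "Hello from Micronaut 3.9";
    }
}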
Apache Software Foundation
The Apache Tomcat team has provided point releases for versions 11.0.0-M5, 10.1.8, 9.0.74 and 8.5.88. All four versions share notable changes such as: a reduction of the default value of maxParameterCount from 10000 to 1000; a correction of a regression in the fix for bug 66442 that meant that streams without a response body did not decrement the active stream count when completing, leading to an ERR_HTTP2_SERVER_REFUSED_STREAM error for some connections; and an implementation of RFC 9239, Updates to ECMAScript Media Types, in which the MIME type for JavaScript has changed to text/javascript. More details on these releases may be found in the release notes for version 11.0.0-M5, version 10.1.8, version 9.0.74 and version 8.5.88.
Ktor
JetBrains has released version 2.3.0 of Ktor, the asynchronous framework for creating microservices and web applications, that includes improvements and fixes such as: support for regular expressions when defining routes; dropping support for the legacy JS compiler, which will be removed in the upcoming release of Kotlin 1.9.0; support for Apache 5 and Jetty 11; and support for structured concurrency for sockets. More details on this release may be found in the release notes.
JHipster
The JHipster team has released version 0.32.0 of JHipster Lite with many dependency upgrades and notable changes such as: support for the Hibernate second-level cache by setting the spring.jpa.properties.hibernate.cache.use_second_level_cache property to true; removal of an unnecessary warning upon executing the npm run lint command; and removal of an unnecessary stack trace upon running the npm t command. More details on this release may be found in the release notes.
JBang
The release of JBang 0.106.3 fixes the formatting of errors returned by ChatGPT for bad keys or usage limits.
Gradle
Gradle 8.1.1 has been released and ships with bug fixes: a MethodTooLargeException when instrumenting a class with a significant number of lambdas for the configuration cache; Kotlin DSL precompiled script plugins built with Gradle 8.1 cannot be used with other versions of Gradle; and Gradle 8.1 configuration of the freeCompilerArgs method for Kotlin in buildSrc breaks a build with errors that are not useful. More details on this release may be found in the release notes.
Global NoSQL Database Market to Witness Exponential Rise in Revenue Share … – Coleman News
MMS • RSS
Posted on nosqlgooglealerts. Visit nosqlgooglealerts
New Jersey, United States – “Global NoSQL Database Market Insight, Forecast 2030” was recently published by Verified Market Research. The analysts and researchers have performed primary as well as secondary research on a large scale with the help of various methodologies, such as Porter’s Five Forces and PESTLE analysis. The Global NoSQL Database Market Report lays emphasis on the key trends and opportunities that may emerge in the near future and positively impact overall industry growth, and the professionals have conducted a detailed analysis of the factors positively influencing that growth. Key drivers fuelling the growth are also discussed in the report. Additionally, challenges and restraining factors that are likely to curb growth in the years to come are put forth by the analysts to prepare the manufacturers for future challenges in advance.
In addition, market revenues based on region and country are provided in the Global NoSQL Database report. The authors of the report have also shed light on the common business tactics adopted by players. The leading players of the Global NoSQL Database market and their complete profiles are included in the report. Besides that, investment opportunities, recommendations, and trends that are trending at present in the Global NoSQL Database market are mapped by the report. With the help of this report, the key players of the Global NoSQL Database market will be able to make sound decisions and plan their strategies accordingly to stay ahead of the curve.
Get Full PDF Sample Copy of Report: (Including Full TOC, List of Tables & Figures, Chart) @ https://www.verifiedmarketresearch.com/download-sample/?rid=129411
Key Players Mentioned in the Global NoSQL Database Market Research Report:
Objectivity Inc, Neo Technology Inc, MongoDB Inc, MarkLogic Corporation, Google LLC, Couchbase Inc, Microsoft Corporation, DataStax Inc, Amazon Web Services Inc & Aerospike Inc.
The competitive landscape is a critical aspect every key player needs to be familiar with. The report throws light on the competitive scenario of the Global NoSQL Database market to know the competition at both the domestic and global levels. Market experts have also offered the outline of every leading player of the Global NoSQL Database market, considering the key aspects such as areas of operation, production, and product portfolio. Additionally, companies in the report are studied based on the key factors such as company size, market share, market growth, revenue, production volume, and profits.
Global NoSQL Database Market Segmentation:
NoSQL Database Market, By Type
• Graph Database
• Column Based Store
• Document Database
• Key-Value Store
NoSQL Database Market, By Application
• Web Apps
• Data Analytics
• Mobile Apps
• Metadata Store
• Cache Memory
• Others
NoSQL Database Market, By Industry Vertical
• Retail
• Gaming
• IT
• Others
Our market analysts are experts in deeply segmenting the Global NoSQL Database market and thoroughly evaluating the growth potential of each and every segment studied in the report. Right at the beginning of the research study, the segments are compared on the basis of consumption and growth rate for a review period of nine years. The segmentation study included in the report offers a brilliant analysis of the Global NoSQL Database market, taking into consideration the market potential of different segments studied. It assists market participants to focus on high-growth areas of the Global NoSQL Database market and plan powerful business tactics to secure a position of strength in the industry.
Global NoSQL Database market research study is incomplete without regional analysis, and we are well aware of it. That is why the report includes a comprehensive and all-inclusive study that solely concentrates on the geographical growth of the Global NoSQL Database market. The study also includes accurate estimations about market growth at the global, regional, and country levels. It empowers you to understand why some regional markets are flourishing while others are seeing a decline in growth. It also allows you to focus on geographies that hold the potential to create lucrative prospects in the near future.
Inquire for a Discount on this Premium Report @ https://www.verifiedmarketresearch.com/ask-for-discount/?rid=129411
What to Expect in Our Report?
(1) A complete section of the Global NoSQL Database market report is dedicated to market dynamics, which include influencing factors, market drivers, challenges, opportunities, and trends.
(2) Another broad section of the research study is reserved for regional analysis of the Global NoSQL Database market, where important regions and countries are assessed for their growth potential, consumption, market share, and other vital factors indicating their market growth.
(3) Players can use the competitive analysis provided in the report to build new strategies or fine-tune their existing ones to rise above market challenges and increase their share of the Global NoSQL Database market.
(4) The report also discusses the competitive situation and trends and sheds light on company expansions and mergers and acquisitions taking place in the Global NoSQL Database market. Moreover, it brings to light the market concentration rate and the market shares of the top three and top five players.
(5) Readers are provided with the findings and conclusion of the research study in the Global NoSQL Database Market report.
Key Questions Answered in the Report:
(1) What are the growth opportunities for the new entrants in the Global NoSQL Database industry?
(2) Who are the leading players functioning in the Global NoSQL Database marketplace?
(3) What are the key strategies participants are likely to adopt to increase their share in the Global NoSQL Database industry?
(4) What is the competitive situation in the Global NoSQL Database market?
(5) What are the emerging trends that may influence the Global NoSQL Database market growth?
(6) Which product type segment will exhibit a high CAGR in the future?
(7) Which application segment will grab a handsome share in the Global NoSQL Database industry?
(8) Which region is lucrative for the manufacturers?
For More Information or Query or Customization Before Buying, Visit @ https://www.verifiedmarketresearch.com/product/nosql-database-market/
About Us: Verified Market Research®
Verified Market Research® is a leading global research and consulting firm that has been providing advanced analytical research solutions, custom consulting and in-depth data analysis for 10+ years to individuals and companies alike that are looking for accurate, reliable and up-to-date research data and technical consulting. We offer insights into strategic and growth analyses, data necessary to achieve corporate goals, and help make critical revenue decisions.
Our research studies help our clients make superior data-driven decisions, understand market forecasts, capitalize on future opportunities and optimize efficiency by working as their partner to deliver accurate and valuable information. The industries we cover span a large spectrum, including technology, chemicals, manufacturing, energy, food and beverages, automotive, robotics, packaging, construction, and mining and gas.
We, at Verified Market Research, assist in understanding holistic market indicating factors and most current and future market trends. Our analysts, with their high expertise in data gathering and governance, utilize industry techniques to collate and examine data at all stages. They are trained to combine modern data collection techniques, superior research methodology, subject expertise and years of collective experience to produce informative and accurate research.
Having serviced 5,000+ clients, we have provided reliable market research services to more than 100 Global Fortune 500 companies such as Amazon, Dell, IBM, Shell, Exxon Mobil, General Electric, Siemens, Microsoft, Sony and Hitachi. We have co-consulted with some of the world’s leading consulting firms, such as McKinsey & Company, Boston Consulting Group, and Bain and Company, on custom research and consulting projects for businesses worldwide.
Contact us:
Mr. Edwyne Fernandes
Verified Market Research®
US: +1 (650)-781-4080
UK: +44 (753)-715-0008
APAC: +61 (488)-85-9400
US Toll-Free: +1 (800)-782-1768
Email: sales@verifiedmarketresearch.com
Website: https://www.verifiedmarketresearch.com/
MMS • Tomas Neubauer
Article originally posted on InfoQ. Visit InfoQ
Transcript
Introduction [00:44]
Roland Meertens: Welcome everyone to the InfoQ podcast. My name is Roland Meertens, your host today, and I will be interviewing Tomas Neubauer. He is the CTO and founder of Quix. We are talking to each other in person at the QCon London conference, where he gave the presentation “Simplifying Real-Time ML Pipelines with Quix Streams, an Open-Source Python Library for ML Engineers”. Make sure to watch his presentation, as it delivers tremendous insights into real-time ML pipelines and how to get started with Quix Streams yourself.
During today’s interview, we will dive deeper into the topic of real-time ML. I hope you enjoy it and I hope you can learn something from it.
Tomas. Welcome to the InfoQ podcast.
Tomáš Neubauer: Thank you for having me.
Roland Meertens: You are giving your talk tomorrow here at QCon London. Can you maybe give a short summary of your talk?
About Quix Streams [01:37]
Tomáš Neubauer: Sure, yeah. So I’m talking about the open-source library Quix Streams. It’s a Python stream processing library for data workloads on top of Kafka. And I’m talking about how to use this library in projects that involve real-time machine learning. I’ll talk about the landscape, different architecture designs to solve this problem, and the pros and cons of each. And then I put this against a real use case, which I develop on stage from scratch at the end of the presentation. In this case, it’s detecting a cyclist crash. So imagine a fitness app running on your handlebars: you crash and you want to inform your relatives or emergency services.
Roland Meertens: So then you are programming this demo live on stage. Which programming language are you using for this?
Tomáš Neubauer: Yes, I’m using the open-source library Quix Streams. So I’m using Python, and yeah, I’m basically starting with having just data from the app: telemetry data like the g-force sensor, GPS-based location, speed, et cetera. And I use a machine learning model that has been trained on historical data to detect that the cyclist crashed.
Roland Meertens: And what kind of machine learning model is this?
Tomáš Neubauer: It’s a TensorFlow model and we basically train it beforehand, so that’s not done on the stage; we label the data correctly and train it in Google Colab. And I’m going to talk about how to get that model from that Colab to production.
What is Real Time ML? [02:40]
Roland Meertens: And so if you’re talking about real-time machine learning, what do you mean with real time? How fast is real time? When can you really say this is real time ML?
Tomáš Neubauer: Well, real time in this case will be five times per second. We will receive telemetry data points from the cyclist, so all of the parameters that I mentioned will be streamed to the cloud five times per second. And we will, with, I would say, a 50-millisecond delay, inform either the emergency services or a consuming application that there was a crash. There’s no one-hour, one-day, one-minute delay.
Roland Meertens: Okay. So you get this data from your smart device and you are cutting this up into chunks which are then sent in to your API or to your application?
Tomáš Neubauer: So we’re streaming this data directly through the pipeline without batching anything. So basically it’s coming piece by piece and we are not waiting for anything either. So every 200 milliseconds we do this detection and either say this is not a crash or this is a crash. And at the end of the presentation, I will have a simple front-end application with a map and an alert, because obviously I’m not going to crash a bike on the stage. I’m going to have a similar model that will detect shaking of the phone, and I’m going to show everyone that the shake is detected.
Data for Real Time ML [04:19]
Roland Meertens: And where does this come from? How did you get started with this?
Tomáš Neubauer: The roots of this idea for this open-source library come from my previous job, where I was working at McLaren, leading a team that was connecting F1 cars to the cloud and therefore to the factory, so people don’t have to travel every second weekend around the world to build real-time decision insights. What I mean by that is basically deciding in a split second that the car needs different tires, different settings for the wing, et cetera. And it was a challenging use case: lots of data, around 30 million numbers per minute from each car. So we couldn’t use any of the database technology that I’m going to talk about in the presentation, and we had to adopt streaming technology. But the biggest problem we faced actually was to get this technology into the hands of our functional teams, which were made up of mechanical engineers, ML engineers and data scientists. They all use Python and really struggled to use this new tech that we gave them.
Roland Meertens: And how should I see this? So you have this car going over circuits, generating a lot of data, this sends it back to some kind of ground station and then do you have humans making decisions real time or is this also ML models which are making decisions?
Tomáš Neubauer: The way it works is that in a car, there are sensors collecting data. Some of them are even more than a kilohertz, or more than a thousand numbers per second. That is streamed over the radio to the garage, where there’s a direct connection to the cloud. And then through the cloud infrastructure, it’s being consumed in the factory, where people during the week are building new models. And then on a race day there are plenty of screens in the garage with dashboards and different waveforms that basically visualize the results of these models, so the people in the garage can immediately decide that the car needs something else.
Roland Meertens: And so this is all part of the race strategy where people need to make decisions in split seconds and this needs the data to be available and the models to run in split seconds?
Tomáš Neubauer: Yes, exactly. And basically during my time at McLaren, we took that know-how from racing and applied it outside. So at the end we ended up doing the same thing for a high-speed railway in Singapore, where basically we were using machine learning to detect brake and suspension deterioration based on historical data. So there are certain vibration signatures that will lead to a deterioration of the object.
Programming languages for Real Time ML [06:45]
Roland Meertens: And you were talking about different programming languages like either Java or Python. How does this integrate with what you’re working on?
Tomáš Neubauer: Basically, the whole streaming world is traditionally Java-based. Most of the brokers are built in Java or Scala. And as a result, most of the tools around it and most of the libraries and frameworks are built in Java. There are some ports and some libraries that let you use these libraries from Python, although there are just a few of them. It’s quite painful because this connection doesn’t really work well, and therefore it’s quite difficult for Python people, especially people from data teams, to leverage this stack. As a result, most of the projects really don’t work that way. Most of the people work in Jupyter Notebooks, in silos, and then software engineers take these models into production.
Roland Meertens: So what do you do to improve this?
Tomáš Neubauer: What I believe is that unless a data team works directly on the product, it’s never going to work really well, because people don’t see the result of their work immediately and they are dependent on other teams. And every time one team is dependent on another, it just kills innovation and efficiency. So the idea is that a data team directly contributes to a product and can test and develop straight away. The code doesn’t run in a Jupyter Notebook or stay there, but actually goes to real-time pipelines, so people can immediately see the result of their work on a physical thing.
Roland Meertens: And you mentioned that there’s different ways people can orchestrate something like this. There’s different ML architectures you could use or you could use for such an approach. Which ones are there?
Tomáš Neubauer: So there are many options to choose from, across all the different dimensions from which you can look at the architecture of building such a system. But one of them is obviously whether you’re going to go for batch or streaming. So are you going to use a technology like Spark and react to data in batches, or do you need a real-time system where you need to use something like Kafka or Pulsar or other streaming technologies? And the second thing is how you are actually going to use your ML models.
So you can deploy them behind an API, or you can actually embed them into a streaming transformation, and I’ll discuss the pros and cons of each solution.
Roland Meertens: And what do you mean with a streaming transformation?
Tomáš Neubauer: This is a fundamental concept of what I’m going to talk about, which is a pub/sub service. So basically we are going to subscribe our model to a topic where we are going to get input data from the phone, and we are going to output the result: is there a crash or not? And this is the major architectural cornerstone of this approach.
The tools needed [09:22]
Roland Meertens: Okay. And you mentioned for example, Kafka and you mentioned some other tools. How does your work then relate to this?
Tomáš Neubauer: Well, what we found out is that Kafka, although it’s powerful, is quite difficult to use. So we have built a level of abstraction on top of it. Then we found that that’s not enough, actually, because streaming in itself introduces complexities and different approaches to common problems. I have a nice example of that tomorrow. So we are building an abstraction on top of the streaming concept as well, which means that you would operate and develop your code in Python as you would in a Jupyter Notebook. So what you are used to when working with static data would apply to building a streaming transformation.
Roland Meertens: And how do you do this? How can people test this with a pre-recorded stream which they then replay and can you still use a Jupyter Notebook or do you as a machine learning or as a data scientist, do you then use and lose part of your tooling?
Tomáš Neubauer: So Quix Streams is an open-source library that you can just download from PyPI, use, and connect to your broker. If you don’t have a broker, you can set one up; it’s open-source software as well. If you don’t want to, you can use our managed broker as well; it doesn’t matter, it works the same. And then we have some open-source simulators of data that you can use if you don’t have your own. So for example, we have an F1 simulator which will give you high-resolution data, so that’s quite cool. You can also, for example, subscribe to Reddit and get messages on Reddit, or you can use the app I’m going to show you tomorrow. It’s also open source, so you can install it from the app store, or possibly you can even clone it, change it to suit your needs, and deploy it by yourself.
Different modalities [11:06]
Roland Meertens: So then Quix handles both text messages but also audio or what kind of data do you handle?
Tomáš Neubauer: Yeah, so we handle time-series data, which involves numerical and string values. Then we handle binary data, which is audio and video and geospatial, et cetera, where we allow developers to just attach this in a column. And then we have metadata, so you don’t have to repeat, for example, that this bike has firmware 1.5. You just send it once and the stateful pipeline will persist that information. And then at the end you can also send events. So for example, a crash is a good example of an event; it doesn’t have any continuous information.
Roland Meertens: Okay. So can you also connect these pipelines such that one pipeline for example gets all the information from your sensor and then sends events to another pipeline? Is this something which is sustainable?
Tomáš Neubauer: Yes. So the whole idea of building systems with this approach is building pipelines. So each node in your architecture is a container that connects to one or more input topics and outputs results to one or more output topics. You create a pipeline that has multiple branches; sometimes they join back together, sometimes they end, and when they end they usually go either to a database or back to the product. And the same is true of the sources: they could be from your product or could be CDC from a database. So you have multiple sources, multiple destinations, and in the middle you have one or more transformations.
Roland Meertens: And is there some kind of limit to the number of inputs or the number of consumers you have for a pipeline?
Tomáš Neubauer: There isn’t really a limit to the number of transformations or sources. One thing is that Kafka is designed to be one-to-one, or one to a small number of consumers and producers. So if you have a use case like we’re going to do today with the phones, where you can possibly have thousands or millions of users, you need to put some gateway between your devices and Kafka, which we’ll do. In our case it’ll be a WebSocket gateway collecting data and then funneling it to a topic.
Roland Meertens: Okay. So do you still have some queue in between?
Tomáš Neubauer: There isn’t really any queue in between, but there’s obviously a queue in Kafka. So as the data flows to the gateway, it’s being put into the queue in a topic, and then the services listening to it will just collect, consume and process this data from that queue.
Use cases for Real Time ML [13:33]
Roland Meertens: Do you already have some customers who are using this in some creative or interesting ways? What are the most interesting use cases you’ve seen?
Tomáš Neubauer: Yes, so one really cool use case is from healthcare, where there are sensors on your lungs listening to your breathing, with the data then being sent to the cloud. And machine learning is used to detect different illnesses that you have, and that’s all going to the company app. So it’s quite similar to what we are going to do here. Then a second quite interesting use case, in public transport, is Wi-Fi sensors detecting the occupancy of underground stations, automatically closing and opening doors, and sending people to a less occupied part of the station.
Roland Meertens: Oh, interesting. So then you have some signal which tells you how many people are in certain parts of the station?
Tomáš Neubauer: Yes, correct. So you have the routers all around the station, and then in real time you know that in the north part of the station there are more people than in the south, and therefore it will be better if people come from the south. And you can do this in a split second.
The implementation [14:33]
Roland Meertens: Oh, interesting. And then in terms of this implementation, if we, for example, want to have some machine learning model act on it, are there specific limitations or specific frameworks you have to use?
Tomáš Neubauer: Basically the beauty of this approach, and I think that’s why it’s so suited to machine learning, is that it’s just Python at the end, where all the magic happens. So you read data from Kafka into Python, and then in that code you are free to do whatever you want. That could be using any pip package out there; you can use a library like OpenCV for image processing, and really anything that is possible in Python is possible with this approach. And then you just output it again with the Python interface. So there’s no black-box operation, there’s no domain-specific language like you will find in Flink.
Roland Meertens: Do I basically just say, “Whenever you have a new piece of data, call this Python function with these arguments”?
Tomáš Neubauer: Correct. And even more than Python functions: you can build Python classes with all the structure that you are used to in Python. You can also try it in a Jupyter Notebook; the library will work in a cell in a Jupyter Notebook. So again, there’s basically freedom of deployment in running this code anywhere. It’s just Python.
Roland Meertens: If people are listening and they’re beginners in realtime machine learning, how would you get started? What would you recommend to people?
Tomáš Neubauer: Well, first of all, what I’m doing here today is available as a tutorial; all the code is open source, so you can basically try it by yourself. There are other tutorials that we have published that go through different use cases, step by step, from literally installing Python, installing Kafka, things like that, to get this going from the start. So I recommend people go to the docs that we have for the library. There are tutorials there and some concepts described in detail. So yeah, that would be the best place to start.
Roland Meertens: Are there specific concepts which are difficult to grasp or is it relatively straightforward?
Tomáš Neubauer: What is really complicated is stateful processing, which we are trying to solve and abstract away from. But if you are interested in learning more about stateful processing, we have it explained in the docs. That’s a very interesting concept and it will open up the intricacies of stream processing. But I think the goal of the library really is to make it simpler. Obviously, it’s a journey, but I’m confident that we have already done a great job in making it a bit easier than it was.
Roland Meertens: Thank you very much. Thank you for joining the podcast and good luck with your talk tomorrow and hopefully people can watch the recording online.
Tomáš Neubauer: Thank you for having me.
MMS • Robert Krzaczynski
Article originally posted on InfoQ. Visit InfoQ
Recently, Microsoft released .NET 8 Preview 3. This new release contains many improvements to ASP.NET Core, such as support for native AOT, server-side rendering with Blazor, rendering Razor components outside of ASP.NET Core, support for sections in Blazor, and monitoring of Blazor Server circuit activity.
In .NET 8 Preview 3, native AOT support for ASP.NET Core was added. Thanks to that, it is possible to publish an ASP.NET Core application with native AOT, creating a standalone application that is compiled ahead of time (AOT) into native code. Publishing and deploying a native AOT application can reduce disk size, memory demand and startup time.
Microsoft developers launched a simple ASP.NET Core API application to compare the differences in application size, memory usage, runtime and CPU load published with and without native AOT. Publishing the application as a native AOT improves start-up time and application size. In the experiment, start-up time was reduced by 80% and application size by 87%. These and other metrics are available on Microsoft’s public benchmarking dashboard.
However, not all features and libraries in ASP.NET Core are compatible with native AOT. The .NET 8 platform represents the beginning of work to include native AOT in ASP.NET Core, with an initial focus on supporting applications using Minimal APIs or gRPC and deployed in cloud-native environments. A table showing the compatibility of ASP.NET Core features with native AOT is included in the article announcing the ASP.NET Core updates in .NET 8 Preview 3.
This preview version also adds initial support for server-side rendering using Blazor components. This is the start of the Blazor unification work to enable the usage of Blazor components for all web UI needs, client-side and server-side. Blazor components are available for server-side rendering without the need for any .cshtml files. The framework will discover Blazor components with routing support and configure them as endpoints. There are no WebAssembly or WebSocket connections and no necessity to load any JavaScript. Each request is handled separately by the Blazor component for the corresponding endpoint.
The work to enable server-side rendering with Blazor components means that it is now possible to render Blazor components outside the context of an HTTP request. Razor components can be rendered as HTML directly into a string or stream, regardless of the ASP.NET Core hosting environment. This is helpful in scenarios where you want to generate HTML fragments.
Another point related to Blazor is the addition of the SectionOutlet and SectionContent components. They provide support for identifying outlets for content to be filled in later. Sections are often used to define placeholders in layouts that are then populated by specific pages. Sections are referenced by either a unique name or a unique object identifier.
Moreover, it is now possible to monitor inbound circuit activity in Blazor Server applications using the new CreateInboundActivityHandler method in CircuitHandler. Inbound circuit activity is any activity sent from the browser to the server, such as user interface events or JavaScript-to-.NET interop calls.
The improvements added to ASP.NET Core received positive feedback from the community. .NET developers left many comments under the release announcement. They especially appreciated the focus on performance and AOT compilation. There was also a question from Ömer Kaya about the availability of Blazor United in .NET 8. Daniel Roth, a principal program manager at Microsoft, answered:
The Blazor United effort is really a collection of features we’re adding to Blazor so that you can get the best of server & client-based web development. These features include: Server-side rendering, streaming rendering, enhanced navigations & form handling, adding client interactivity per page or component, and determining the client render mode at runtime. We’ve started delivering server-side rendering support for Blazor with .NET 8 Preview 3, which is now available to try out. We plan to deliver the remaining features in upcoming previews. We hope to deliver them all for .NET 8, but we’ll see how far we get.
MMS • Sergio De Simone
Article originally posted on InfoQ. Visit InfoQ
Now available to developers, the first beta of Android 14 focuses on privacy, security, performance, developer productivity, and user customization. In addition, it improves user experience with large-screen devices on tablets and foldables.
To better protect sensitive data, Android 14 introduces the new accessibilityDataSensitive attribute. This attribute can be used by apps to enable access to specific data and views only for Google’s and third-party services meant to help users with disabilities.
If an app uses this new attribute, its visibility will in fact be limited to apps that declare the isAccessibilityTool attribute. Play Protect is the mechanism responsible for scanning apps when they are downloaded from the Play Store and making sure they use the isAccessibilityTool attribute only if they are actually meant to help people with disabilities.
Google says that there are two main use cases where apps can benefit from this new feature: protecting user data from third-party access and preventing critical actions from being carried out, such as authorizing a payment with a credit card. The importance of this feature cannot be overstated, since it brings fully under developer control which data an app considers sensitive and thus protected from general external access.
Additionally, Android 14 beta improves a number of system UI elements, including a new, more prominent back arrow and a customizable share sheet.
Apps can add custom actions to the system share sheet by creating a ChooserAction, which is shown to the user when the chooser is invoked with the Intent.EXTRA_CHOOSER_CUSTOM_ACTIONS extra. This has the effect of displaying a separate row of app-specific actions on top of the cross-system action row.
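A minimal sketch from inside an Activity, using android.service.chooser.ChooserAction; the share intent, the PendingIntent, the icon resource and the label are illustrative placeholders:

// Inside an Activity; shareIntent and actionIntent are assumed to exist.
ChooserAction customAction = new ChooserAction.Builder(
        Icon.createWithResource(this, R.drawable.ic_custom_action), // placeholder icon
        "Open settings",                                            // placeholder label
        actionIntent)                                               // a PendingIntent fired on tap
        .build();

Intent chooser = Intent.createChooser(shareIntent, null);
// Custom actions are passed as an array of ChooserAction objects.
chooser.putExtra(Intent.EXTRA_CHOOSER_CUSTOM_ACTIONS,
        new ChooserAction[] { customAction });
startActivity(chooser);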
The new share sheet also makes it easier to go back to the invoking app and add new items to those being shared. Finally, the UI has been improved by allowing you to scroll when sharing a large number of images, and to mix text and images.
Android 14 beta 2 will become available during Google I/O next month, and beta 3 in June. Android 14 beta 4, coming in July, will be the final beta before the official release.
For a full list of all changes in the Android 14 beta, do not miss this Twitter thread by Mishaal Rahman, co-host of the All About Android show.