Java Gets a Boost with String Templates: Simplifying Code and Improving Security

MMS Founder
MMS A N M Bazlur Rahman

Article originally posted on InfoQ. Visit InfoQ

JEP 430, String Templates (Preview), has been promoted from Proposed to Target to Targeted status for JDK 21. This feature JEP proposes to enhance the Java programming language with string templates, which are similar to string literals but contain embedded expressions that are incorporated into the string at run time.

Java developers can now enhance the language’s string literals and text blocks with string templates that can produce specialized results by coupling literal text with embedded expressions and processors. The aim of this new feature is to simplify the writing of Java programs, improve the readability of expressions that mix text and expressions, and enhance the security of Java programs that compose strings from user-provided values.

This JEP introduces a new kind of expression, called a template expression, that allows developers to perform string interpolation and compose strings safely and efficiently. Template expressions are programmable and not limited to composing strings: they can turn structured text into any kind of object according to domain-specific rules. In a template expression, a template processor combines the literal text in the template with the values of any embedded expressions at run time to produce a result. For example:

String name = "Joan";

String info = STR."My name is \{name}";
assert info.equals("My name is Joan");   // true

A template expression has a syntax similar to that of a string literal, but with a template-processor prefix. The second line of the code above contains a template expression.

By contrast, string interpolation, as offered by many programming languages, allows programmers to combine string literals and expressions into a single string, providing greater convenience and clarity than traditional string concatenation. However, it can create dangerous strings that may be misinterpreted by other systems, especially when dealing with SQL statements, HTML/XML documents, JSON snippets, shell scripts, and natural-language text. To prevent security vulnerabilities, Java requires developers to validate and sanitize strings with embedded expressions using escape or validate methods.

A safer and more efficient solution is a first-class, template-based mechanism for composing strings that automatically applies template-specific rules: escaped quotes in SQL statements, no illegal entities in HTML documents, and boilerplate-free message localization. This approach relieves developers of manually escaping each embedded expression and validating the entire string. That is exactly what Java’s template expressions provide, in contrast to the string interpolation used by other popular programming languages.

The design of the template expression makes it impossible to go directly from a string literal or text block with embedded expressions to a string with the expressions’ values interpolated. This is to prevent dangerously incorrect strings from spreading through the program. Instead, a template processor, such as STR, FMT or RAW, processes the string literal, validates the result, and interpolates the values of embedded expressions.

Here are some examples of template expressions that span multiple lines to describe HTML text, JSON text, and a formatted table:

String title = "My Web Page";
String text  = "Hello, world";
String html = STR."""
        <html>
          <head>
            <title>\{title}</title>
          </head>
          <body>
            <p>\{text}</p>
          </body>
        </html>
        """;

Which yields the following output:

| """
| <html>
|   <head>
|     <title>My Web Page</title>
|   </head>
|   <body>
|     <p>Hello, world</p>
|   </body>
| </html>
| """

Another example is as follows:

String name    = "Joan Smith";
String phone   = "555-123-4567";
String address = "1 Maple Drive, Anytown";
String json = STR."""
    {
        "name":    "\{name}",
        "phone":   "\{phone}",
        "address": "\{address}"
    }
    """;

Similarly, this produces the following output:

| """
| {
|     "name":    "Joan Smith",
|     "phone":   "555-123-4567",
|     "address": "1 Maple Drive, Anytown"
| }
| """

Another example:

record Rectangle(String name, double width, double height) {
    double area() {
        return width * height;
    }
}

Rectangle[] zone = new Rectangle[] {
        new Rectangle("Alfa", 17.8, 31.4),
        new Rectangle("Bravo", 9.6, 12.4),
        new Rectangle("Charlie", 7.1, 11.23),
    };

String form = FMT."""
        Description     Width    Height     Area
        %-12s\{zone[0].name}  %7.2f\{zone[0].width}  %7.2f\{zone[0].height}     %7.2f\{zone[0].area()}
        %-12s\{zone[1].name}  %7.2f\{zone[1].width}  %7.2f\{zone[1].height}     %7.2f\{zone[1].area()}
        %-12s\{zone[2].name}  %7.2f\{zone[2].width}  %7.2f\{zone[2].height}     %7.2f\{zone[2].area()}
        \{" ".repeat(28)} Total %7.2f\{zone[0].area() + zone[1].area() + zone[2].area()}
        """;

The above code produces the following output:

| """
| Description     Width    Height     Area
| Alfa            17.80    31.40      558.92
| Bravo            9.60    12.40      119.04
| Charlie          7.10    11.23       79.73
|                              Total  757.69
| """

Java provides two template processors for performing string interpolation: STR and FMT. STR replaces each embedded expression in the template with its (stringified) value, while FMT interprets format specifiers that appear to the left of embedded expressions. The format specifiers are the same as those defined in java.util.Formatter. In cases where the unprocessed template is needed, the standard RAW template processor can be used. This processor simply returns the original template without any interpolation or processing.
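
For instance, RAW can capture the template itself so that it can be processed later. The following snippet mirrors an example from the JEP, assuming the java.lang.StringTemplate preview API in JDK 21:

String name = "Joan";
StringTemplate st = RAW."My name is \{name}";   // unprocessed template
String info = STR.process(st);                  // "My name is Joan"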

Furthermore, developers can create their own template processors for use in template expressions. A template processor is an object of a class that implements the functional interface ValidatingProcessor, providing its single abstract method, which takes a StringTemplate and returns an object. Template processors can perform validation at runtime and return objects of any type, not just strings.
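
As a rough sketch of what a custom processor can look like, the following assumes the JDK 21 preview API, where processors can also be created via StringTemplate.Processor.of and a template exposes its literal fragments() and expression values(); the UPPER name is purely illustrative:

import java.util.Iterator;

var UPPER = StringTemplate.Processor.of((StringTemplate st) -> {
    StringBuilder sb = new StringBuilder();
    Iterator<String> fragments = st.fragments().iterator();
    for (Object value : st.values()) {
        sb.append(fragments.next());                     // literal text
        sb.append(String.valueOf(value).toUpperCase());  // upper-cased expression value
    }
    sb.append(fragments.next());                         // trailing fragment
    return sb.toString();
});

String name = "Joan";
String greeting = UPPER."hello \{name}";   // "hello JOAN"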

In conclusion, template expressions in Java make it easy and safe for developers to do string interpolation and string composition.



MongoDB: A Top Pick For A Recovery In IT Demand (NASDAQ:MDB) | Seeking Alpha

MMS Founder
MMS RSS

Posted on nosqlgooglealerts. Visit nosqlgooglealerts

[Image: MongoDB headquarters in Silicon Valley – photo: Sundry Photography]

How long until usage trends start to move the other way?

Have you ever seen one of those rope bridges strung across a river in a jungle, or high across a canyon? Scary as heck, at least for this



Late Architecture with Functional Programming

MMS Founder
MMS Ben Linders

Article originally posted on InfoQ. Visit InfoQ

Many approaches to software architecture assume that the architecture is planned at the beginning. Unfortunately, architecture planned in this way is hard to change later. Functional programming can help achieve loose coupling to the point that advance planning can be kept to a minimum, and architectural decisions can be changed later.

Michael Sperber spoke about software architecture and functional programming at OOP 2023 Digital.

Sperber gave the example of dividing up the system’s code among its building blocks. This is a particularly important kind of architectural decision, as it allows different building blocks to be worked on separately, possibly by different teams. One way to do this is to use Domain-Driven Design (DDD) for the coarse-grain building blocks – bounded contexts:

DDD says you should identify bounded contexts via context mapping – at the beginning. However, if you get the boundaries between the contexts wrong, you lose a lot of the benefits. And you will get them wrong, at least slightly – and then it’s hard to move them later.

According to Sperber, functional programming enables late architecture and reduces coupling compared to OOP. In order to defer macroarchitecture decisions, we must always decouple, Sperber argued. Components in functional programming are essentially just data types and functions, and these functions work without mutable state, he said. This makes dependencies explicit and coupling significantly looser than with typical OO components. This in turn enables us to build functionality that is independent of the macroarchitecture, Sperber said.
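
To make this concrete, here is a small, hypothetical Java sketch in the spirit of Sperber’s argument (not code from his talk): the component is just an immutable data type plus a pure function, so its dependencies are exactly the values passed in, with no hidden state.

// An immutable data type: no setters, no hidden state.
record Order(String id, double amount) {}

// A pure function: its inputs and outputs are its entire contract,
// so coupling to the rest of the system stays explicit and minimal.
static double totalAmount(java.util.List<Order> orders) {
    return orders.stream().mapToDouble(Order::amount).sum();
}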

Sperber made clear that functional programming isn’t “just like OOP only without mutable state”. It comes with its own methods and culture for domain modelling, abstraction, and software construction. You can get some of the benefits just by adopting immutability in your OO project. To get all of them, you need to dive deeper, and use a proper functional language, as Sperber explained:

Functional architecture makes extensive use of advanced abstraction, to implement reusable components, and, more importantly, supple domain models that anticipate the future. In exploring and developing these domain models, functional programmers frequently make use of the rich vocabulary provided by mathematics. The resulting abstractions are fundamentally enabled by the advanced abstraction facilities offered by functional languages.

InfoQ interviewed Michael Sperber about how our current toolbox of architectural techniques predisposes us to bad decisions that are hard to undo later, and what to do about this problem.

InfoQ: What are the challenges of defining the macroarchitecture at the start of a project?

Michael Sperber: A popular definition of software architecture is that it’s the decisions that are hard to change later. Doing this at the beginning means doing it when you have the least information. Consequently, there’s a good chance the decisions are wrong.

InfoQ: What makes it so hard to move boundaries between contexts?

Sperber: It seems in the architecture community we have forgotten how to achieve modularity within a bounded context or a monolith, which is why there’s this new term “modulith”, implying that a regular monolith is non-modular by default and that its internals are tightly coupled.

InfoQ: So you’re saying we don’t know how to achieve loose coupling within a monolith?

Sperber: Yes. This is because the foundation of OO architecture is programming with mutable state, i.e. changing your objects in place. These state changes create invisible dependencies that are hard to see and that tangle up your building blocks. This affects not just the functional aspects of a project, but also other quality goals.

InfoQ: Can you give an example?

Sperber: Let’s say you choose parallelism as a tactic to achieve high performance: You need to choose aggregate roots, and protect access to those roots with mutual exclusion. This is tedious work, error-prone, hard to make fast, and increases coupling dramatically.

InfoQ: What’s your advice to architects and developers if they want to improve the way that they take architectural decisions?

Sperber: Even if you can’t use a functional language in your project, play with the basics of functional programming to get a feel for the differences and opportunities there. If you’re new to FP, I recommend the How to Design Programs approach to get you started – or DeinProgramm for German speakers.

There are also two books on software construction with functional programming.



Article: Respect. Support. Connect. The Manager’s Role in Building a Great Remote Team

MMS Founder
MMS Kinga Witko

Article originally posted on InfoQ. Visit InfoQ

Key Takeaways

  • Mindfulness at work is not just about individual benefits; it is also about creating a more mindful society and working environment. When we practice mindfulness, we are better equipped to solve the challenges that face us as an industry.
  • Remote working comes with a unique set of challenges that can make it difficult for some people to adjust. One of the best daily routines to counter this is a virtual coffee – a short daily meeting open for everybody, but not mandatory. You can come, bring coffee or food and just hang around with other people. In such a friendly setup it’s easier to find out that we like similar things, read good books and are fun to work with.
  • It’s important to make a virtual office accessible for everybody – from the right tech gear, like laptops or tablets with good internet connections, to ground rules on how to interact efficiently within a team.
  • In the workplace, it is important to understand and respect the different personality types of our colleagues. We may not always understand or agree with their approach, but it is important to recognize that everyone has their preferences, strengths, and weaknesses.
  • The times when a boss dictates what to do and how to do it are gone. Modern companies require working together with a team and guiding people on their development paths. 

The industry is changing, and our perception is changing. We have people working in different time zones; we build diverse teams.

As managers, we also face challenges in terms of needs, accessibility, gender, nationalities, and other conditions that influence our teams and working environments. We cannot build projects based on Excel sheets only, not considering peoples’ preferences and options for personal growth. We need to see real people – even if we meet them in a virtual working environment only.

Remote working challenges 

The past three years have brought us to a different reality, in which we can easily choose the most productive place, suitable working hours, and remote employers. Most of us no longer have to stay 8 – 9 hours at the office, but can provide the service from our own homes. 

Remote working comes with a unique set of challenges that can make it difficult for some people to adjust.

It doesn’t have to be “work from home” per se, but also connecting from different offices, time zones, or countries, living the life as a digital nomad, or being able to work from anywhere when life circumstances force you to do so.

Unfortunately, it comes with a set of challenges that employees face daily in their professional environment. The most common are:

  • Isolation – remote workers may feel isolated and disconnected from their colleagues, which can affect their motivation and job satisfaction.
  • Communication – different time zones, languages, tools, and habits.
  • Distractions – such as family members needing attention, construction work nearby, loud noises, etc. I don’t claim that it is easy to focus in an open-space office – but there it is easier to separate the private from the professional.

When the pandemic started, my colleagues from different countries around the world began to work from their homes, and we found it really attractive to compare our background noises. I heard some exotic animals, music that was new to me, the sound of never-ending traffic horns, and crying babies. On the other side, my “listeners” started to recognize my cat’s noises, as he usually would start to meow incredibly loud just after I would launch a meeting.

Online meeting challenges  

My team and I, all spread around the globe, wanted to be connected with the team in real time. On a daily basis, some of the teammates just exchanged asynchronous messages on chat. They knew one another from their pictures in Teams and had never had the opportunity to talk.

Have you ever tried to do a retrospective meeting with people located in four different time zones? I had this crazy idea of waking some of them up at 3 AM my time and not letting some of them fall asleep at 10 PM their time. We had one common session with food and fun and it helped us to shape a real team, not just co-workers.

One of my favourite daily routines is a virtual coffee – a short daily meeting open for everybody – but not mandatory. You can come, bring coffee or food and just hang around with other people. In such a friendly setup it’s easier to find out that we like similar things, read good books and are fun to work with.

It is a completely different situation when you meet somebody in person and then you cooperate with her/him remotely. If you only know somebody from Teams or Zoom, it is extremely hard to create a bond and trust.

What is most fun about this? Did you notice that people in Teams are all the same height? When you then meet somebody in person, it might be quite a shocking experience. I had a chance to meet my leader after a couple of months of working together online only. In Teams we were talking face-to-face and eye-to-eye, while in real life it turned out that he was almost 50 cm taller than I am, and he needed to sit down to have a normal conversation with me. Funny, isn’t it?

When you lead a team, it is not enough to get information from the team – it is also your role to connect them with one another. I must say, this required stepping out of our comfort zones (and sometimes out of bed early in the morning) for the entire group, but in the end, they didn’t mind.

Making the virtual office accessible to every team member

It is important to set ground rules for communication and make everybody aware of them. 

  • Set a time for the everyday meetings (daily or synchro) that suits everyone. If that’s not possible, at least give people a chance to meet online from time to time. Be mindful when it comes to time zones.
  • If you go for lunch or you are not available – mark it in your calendar or communicator – be transparent and require the same from other people in the project.
  • Use separate communication channels to separate daily business from chatting. It’s important to have space for both, but not to mix them.
  • Make it clear how the team communicates – whether it’s just Teams/Zoom/Slack, or you use emails as well – what is the preferred response time? And check if everybody is okay with that.

To make the virtual office accessible to every team member, there are a few things you can do:   

Make sure everyone’s got the right tech gear, like laptops or tablets with good internet connections, so they can connect to the virtual office from wherever they are. Next, make sure everyone knows how to use the virtual office software, like Zoom or Slack or whatever you’re using. Maybe run some training sessions or create some tutorials. 

Moreover, be open to different communication styles – some folks might prefer video calls, while others might prefer messaging.

In one of my projects, I had a developer who never showed his face in the meetings with the group. He used an avatar and was quite a mysterious person to the rest of the team. On the other hand, in one-on-one sessions, I was able to see him live. Some people might think it’s weird, but he probably had reasons behind it, so we respected it.     

Lastly, make sure everyone knows they can reach out for help if they’re having any issues connecting to or using the virtual office. Accessibility is all about making sure everyone’s included, so do what you can to make sure everyone feels like they’re part of the team, whether they’re in the same room or on the other side of the world! My favourite tip: when at least one person is not in the same room, everybody connects from their desktops – not from the conference room. It improves sound quality and allows participation in the discussion on the same terms as everyone else.

Mindfulness at work

Mindfulness is often described as the practice of purposely bringing one’s attention to the present-moment experience without evaluation. This simple definition, however, belies the profound impact that mindfulness can have on our lives.

Research has shown that being mindful can bring a wide range of benefits, from reducing stress and anxiety to improving focus and memory. It can also help us to build better relationships, as it enables us to be more attuned to the needs and perspectives of people around us.

At the same time, mindfulness is not just about individual benefits; it is also about creating a more mindful society and working environment. When we practice mindfulness, we become more compassionate and understanding, and we are better equipped to solve the challenges that face us as an industry.

To practice mindfulness, you can start small, as we did in my team. For example, we exchange some yoga practices that help us be in better connection with our bodies. We also stay close to one another, we check on others’ feelings and moods and don’t cross our boundaries. It all makes us a team – not a group of co-workers.

Personality types help to better understand and respect people

We all come from different backgrounds, have different experiences, and possess unique personalities, and it is these differences that make us who we are.

In the workplace, it is important to understand and respect the different personality types of our colleagues. We may not always understand or agree with their approach, but it is important to recognize that everyone has their preferences, strengths, and weaknesses. By respecting these differences, we can work together more effectively, and create a more harmonious work environment.

Thomas Erikson’s book “Surrounded by Idiots” is a total game-changer when it comes to understanding personality types! The book argues that there are four main personality types – red, blue, yellow, and green – and that knowing someone’s type can help us understand how they think, feel, and behave.

For example, if you know someone’s a blue, you might understand that they’re detail-oriented and like to plan things out in advance, while a yellow might be more spontaneous and enjoy taking risks. This knowledge can help us communicate with others in a way that’s more effective and respectful.

By understanding someone’s personality type, we can also learn to appreciate their strengths and weaknesses. Maybe you’re a red who’s great at taking charge, but not so great at listening to other people’s ideas – yet you can still respect a yellow’s ability to come up with creative solutions.

I’m almost 100% red and I have the entire package. I ALWAYS do my tasks on time (or even before the deadline), but barely do small talk – I just go straight to business. There are people who understand my pace and we get along fantastically; there are also some who find my way of working rude. And I respect both. I know how hard it is for me to wait for a detailed analysis from a blue 🙂

How managers build relationships with their teams  

Building strong relationships with your team is crucial for any manager, but it’s especially important when you’re working in software testing. As testers usually have an overall view of the project, they need to be in touch with everybody else in the project, not just their own group. This can be stressful both for you and your team members. Here are a few tips for building good relationships:

  1. Communication is key. Make sure you’re clear and transparent with your team about goals, expectations, and deadlines. Encourage open and honest communication, and be willing to listen to feedback and concerns. It is even more important if you work remotely and know each other only via Teams or Zoom.
  2. Show appreciation. Let your team know when they’re doing a great job, and celebrate their successes. Take the time to thank them for their hard work, and recognize their contributions.
  3. Invest in your team’s development. Support your team’s growth and learning, whether that’s through training, conferences, or mentorship. Show that you care about their careers, and want to help them achieve their goals.
  4. Have fun! Work can be stressful, so make sure you take the time to enjoy each other’s company. Plan team-building activities, celebrate birthdays and milestones, and find ways to inject a bit of fun and laughter into the workday.

From time to time I have the opportunity to work with my team in one office. On those days we eat cake, drink coffee together and have lunch. Our space in the office is also decorated with hand-made posters, funny sentences from our chat, and made-up certificates. I think people like to work from there and enjoy the good mood that we have created together.

Become an approachable person

The times when a boss dictates what to do and how to do it are gone. Modern companies require working together with a team and guiding people on their development paths. It’s no longer about focusing on KPIs, deadlines, and products, as that becomes the easiest way of losing great people and being the Worst Place To Work.

In such a team composition, a modern manager or a leader has to be approachable. Easier said than done, right?

First, make sure you’re always available for your team. Encourage them to come to you with questions, concerns, or just to chat. My tip is to have a regular slot in your calendar dedicated to your team only.

Second, be friendly and personable. Get to know your team members on a personal level, and show genuine interest in their lives and interests. Of course, you are at work, and not everybody would like to share their personal life details, but being open and caring is what you need to practice every day.

Last but not least – be transparent. Share information about what’s going on in the company, and be honest about any challenges or issues that arise. By being approachable, friendly, and transparent, you can build strong relationships with your team that will help you all succeed.



Global NoSQL Software Market Size, Analysis, Industry Trends, Top Suppliers and COVID …

MMS Founder
MMS RSS

Posted on nosqlgooglealerts. Visit nosqlgooglealerts

New Jersey, United States – The report offers an in-depth analysis of the Global NoSQL Software Market taking into account market dynamics, segmentation, geographical expansion, competitive landscape, and various other key aspects. The market analysts who have prepared the report have thoroughly studied the Global NoSQL Software market and have offered reliable and accurate data. The report analyses the current trends, growth opportunities, competitive pricing, restraining factors, and boosters that may have an impact on the overall dynamics of the Global NoSQL Software market. The report analytically studies the microeconomic and macroeconomic factors affecting the Global NoSQL Software market growth. New and emerging technologies that may influence the Global NoSQL Software market growth are also being studied in the report.

Both leading and emerging players of the Global NoSQL Software market are comprehensively looked at in the report. The analysts authoring the report deeply studied each and every aspect of the business of key players operating in the Global NoSQL Software market. In the company profiling section, the report offers exhaustive company profiling of all the players covered. The players are studied on the basis of different factors such as market share, growth strategies, new product launch, recent developments, future plans, revenue, gross margin, sales, capacity, production, and product portfolio.

Get Full PDF Sample Copy of Report: (Including Full TOC, List of Tables & Figures, Chart) @ https://www.verifiedmarketresearch.com/download-sample/?rid=153255

Key Players Mentioned in the Global NoSQL Software Market Research Report:

Amazon, Couchbase, MongoDB Inc., Microsoft, Marklogic, OrientDB, ArangoDB, Redis, CouchDB, DataStax.

Key companies operating in the Global NoSQL Software market are also comprehensively studied in the report. The Global NoSQL Software report offers definite understanding into the vendor landscape and development plans, which are likely to take place in the coming future. This report as a whole will act as an effective tool for the market players to understand the competitive scenario in the Global NoSQL Software market and accordingly plan their strategic activities.

Global NoSQL Software Market Segmentation:  

NoSQL Software Market, By Type

• Document Databases
• Key-value Databases
• Wide-column Store
• Graph Databases
• Others

NoSQL Market, By Application

• Social Networking
• Web Applications
• E-Commerce
• Data Analytics
• Data Storage
• Others

Players can use the report to gain sound understanding of the growth trend of important segments of the Global NoSQL Software market. The report offers separate analysis of product type and application segments of the Global NoSQL Software market. Each segment is studied in great detail to provide a clear and thorough analysis of its market growth, future growth potential, growth rate, growth drivers, and other key factors. The segmental analysis offered in the report will help players to discover rewarding growth pockets of the Global NoSQL Software market and gain a competitive advantage over their opponents.

Key regions including but not limited to North America, Asia Pacific, Europe, and the MEA are exhaustively analyzed based on market size, CAGR, market potential, economic and political factors, regulatory scenarios, and other significant parameters. The regional analysis provided in the report will help market participants to identify lucrative and untapped business opportunities in different regions and countries. It includes a special study on production and production rate, import and export, and consumption in each regional Global NoSQL Software market considered for research. The report also offers detailed analysis of country-level Global NoSQL Software markets.

Inquire for a Discount on this Premium Report @ https://www.verifiedmarketresearch.com/ask-for-discount/?rid=153255

What to Expect in Our Report?

(1) A complete section of the Global NoSQL Software market report is dedicated for market dynamics, which include influence factors, market drivers, challenges, opportunities, and trends.

(2) Another broad section of the research study is reserved for regional analysis of the Global NoSQL Software market where important regions and countries are assessed for their growth potential, consumption, market share, and other vital factors indicating their market growth.

(3) Players can use the competitive analysis provided in the report to build new strategies or fine-tune their existing ones to rise above market challenges and increase their share of the Global NoSQL Software market.

(4) The report also discusses competitive situation and trends and sheds light on company expansions and merger and acquisition taking place in the Global NoSQL Software market. Moreover, it brings to light the market concentration rate and market shares of top three and five players.

(5) Readers are provided with findings and conclusion of the research study provided in the Global NoSQL Software Market report.

Key Questions Answered in the Report:

(1) What are the growth opportunities for the new entrants in the Global NoSQL Software industry?

(2) Who are the leading players functioning in the Global NoSQL Software marketplace?

(3) What are the key strategies participants are likely to adopt to increase their share in the Global NoSQL Software industry?

(4) What is the competitive situation in the Global NoSQL Software market?

(5) What are the emerging trends that may influence the Global NoSQL Software market growth?

(6) Which product type segment will exhibit high CAGR in future?

(7) Which application segment will grab a handsome share in the Global NoSQL Software industry?

(8) Which region is lucrative for the manufacturers?

For More Information or Query or Customization Before Buying, Visit @ https://www.verifiedmarketresearch.com/product/nosql-software-market/ 

About Us: Verified Market Research® 

Verified Market Research® is a leading global research and consulting firm that has been providing advanced analytical research solutions, custom consulting and in-depth data analysis for 10+ years to individuals and companies alike that are looking for accurate, reliable and up-to-date research data and technical consulting. We offer insights into strategic and growth analyses, data necessary to achieve corporate goals, and help make critical revenue decisions.

Our research studies help our clients make superior data-driven decisions, understand market forecasts, capitalize on future opportunities and optimize efficiency by working as their partner to deliver accurate and valuable information. The industries we cover span a large spectrum, including Technology, Chemicals, Manufacturing, Energy, Food and Beverages, Automotive, Robotics, Packaging, Construction, Mining & Gas, etc.

We, at Verified Market Research, assist in understanding holistic market indicating factors and most current and future market trends. Our analysts, with their high expertise in data gathering and governance, utilize industry techniques to collate and examine data at all stages. They are trained to combine modern data collection techniques, superior research methodology, subject expertise and years of collective experience to produce informative and accurate research. 

Having serviced 5000+ clients, we have provided reliable market research services to more than 100 Global Fortune 500 companies such as Amazon, Dell, IBM, Shell, Exxon Mobil, General Electric, Siemens, Microsoft, Sony and Hitachi. We have co-consulted with some of the world’s leading consulting firms, like McKinsey & Company, Boston Consulting Group, and Bain and Company, for custom research and consulting projects for businesses worldwide.

Contact us:

Mr. Edwyne Fernandes

Verified Market Research®

US: +1 (650)-781-4080
UK: +44 (753)-715-0008
APAC: +61 (488)-85-9400
US Toll-Free: +1 (800)-782-1768

Email: sales@verifiedmarketresearch.com

Website:- https://www.verifiedmarketresearch.com/



Java News Roundup: New OpenJDK JEPs, Payara Platform, Spring and Tomcat Updates, WildFly 28

MMS Founder
MMS Michael Redlich

Article originally posted on InfoQ. Visit InfoQ

This week’s Java roundup for April 17th, 2023 features news from OpenJDK, JDK 21, JMC 8.3.1, BellSoft, Spring Boot, Spring Security, Spring Session, Spring Authorization Server, Spring Integration, Spring for GraphQL and Spring Shell, WildFly 28, Payara Platform, Open Liberty 23.0.0.4-beta, Micronaut 3.9, Apache Tomcat updates, Ktor 2.3, JHipster Lite 0.32, JBang 0.106.3 and Gradle 8.1.1.

OpenJDK

JEP 446, Scoped Values (Preview), has been promoted from its JEP Draft 8304357 to Candidate status. Formerly known as Extent-Local Variables (Incubator), this JEP is now a preview feature following JEP 429, Scoped Values (Incubator), delivered in JDK 20. This JEP proposes to enable sharing of immutable data within and across threads. This is preferred to thread-local variables, especially when using large numbers of virtual threads.
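
As a rough illustration of the idea (following the shape of the examples in the JEP, with illustrative names), a scoped value is bound for the duration of a call and is readable from any code invoked beneath it:

private static final ScopedValue<String> USER = ScopedValue.newInstance();

void handle() {
    ScopedValue.where(USER, "alice").run(() -> {
        // USER.get() returns "alice" here and in any method called from this lambda,
        // without passing the value explicitly and without mutable thread-local state.
        System.out.println(USER.get());
    });
}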

JEP 447, Statements before super(), has been promoted from its JEP Draft 8300786 to Candidate status. This JEP, under the auspices of Project Amber, proposes to: allow statements that do not reference an instance being created to appear before the this() or super() calls in a constructor; and preserve existing safety and initialization guarantees for constructors. Gavin Bierman, consulting member of technical staff at Oracle, has provided an initial specification of this JEP for the Java community to review and provide feedback.
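
A brief sketch of what this would permit, adapted from the style of the examples in the JEP (the class names here are illustrative): a constructor argument can be validated before the super() call instead of being funneled through a static helper method.

class Square {
    Square(int side) { /* ... */ }
}

class PositiveSquare extends Square {
    PositiveSquare(int side) {
        // With JEP 447, statements that do not reference the instance
        // being created may appear before the explicit super() call.
        if (side <= 0)
            throw new IllegalArgumentException("side must be positive");
        super(side);
    }
}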

JEP 448, Vector API (Sixth Incubator), has been promoted from its JEP Draft 8305868 to Candidate status. This JEP, under the auspices of Project Panama, incorporates enhancements in response to feedback from the previous five rounds of incubation: JEP 438, Vector API (Fifth Incubator), delivered in JDK 20; JEP 426, Vector API (Fourth Incubator), delivered in JDK 19; JEP 417, Vector API (Third Incubator), delivered in JDK 18; JEP 414, Vector API (Second Incubator), delivered in JDK 17; and JEP 338, Vector API (Incubator), delivered as an incubator module in JDK 16. This feature proposes to enhance the Vector API to load and store vectors to and from a MemorySegment as defined by JEP 424, Foreign Function & Memory API (Preview).

JEP 449, Deprecate the Windows 32-bit x86 Port for Removal, has been promoted from its JEP Draft 8303167 to Candidate status. This feature JEP, introduced by George Adams, Senior Program Manager at Microsoft, proposes to deprecate the Windows x86-32 port with the intent to remove it in a future release. With no intent to implement JEP 436, Virtual Threads (Second Preview), in 32-bit platforms, removing support for this port will enable OpenJDK developers to accelerate development of new features.

JEP Draft 8305968, Integrity and Strong Encapsulation, and JEP Draft 8306275, Disallow the Dynamic Loading of Agents by Default, have been submitted by Ron Pressler, architect and technical lead for Project Loom at Oracle.

Integrity and Strong Encapsulation proposes to assure the integrity of code and data with a variety of features, such as strong encapsulation, that are enabled by default. Goals of this draft include: allow the Java platform to robustly maintain invariants required for maintainability, security and performance; and differentiate use cases where breaking encapsulation is convenient from use cases where disabling encapsulation is essential.

Disallow the Dynamic Loading of Agents by Default, following the approach of Integrity and Strong Encapsulation, proposes to disallow the dynamic loading of agents into a running JVM by default. Goals of this draft include: reassess the balance between serviceability and integrity; and ensure that a majority of tools, which do not need to dynamically load agents, are unaffected.

JDK Mission Control (JMC) 8.3.1 has been released with notable fixes such as: unable to open the JMX Console after installing plugins on macOS and Linux; unable to edit Eclipse project run configurations after installing JMC plugins on Linux; and unable to perform flight recordings on jlinked applications. More details on this release may be found in the release notes.

JDK 21

Build 19 of the JDK 21 early-access builds was also made available this past week featuring updates from Build 18 that include fixes to various issues. Further details on this build may be found in the release notes.

For JDK 21, developers are encouraged to report bugs via the Java Bug Database.

JDK 20

JDK 20.0.1, the first maintenance release of JDK 20, along with security updates for JDK 17.0.7, JDK 11.0.19 and JDK 8u371, were made available as part of Oracle’s Critical Patch Update for April 2023.

BellSoft

Also concurrent with Oracle’s Critical Patch Update (CPU) for April 2023, BellSoft has released CPU patches for versions 17.0.6.0.1, 11.0.18.0.1 and 8u371 of Liberica JDK, their downstream distribution of OpenJDK. In addition, Patch Set Update (PSU) versions 20.0.1, 17.0.7, 11.0.19 and 8u372, containing CPU and non-critical fixes, have also been released.

Spring Framework

The first release candidate of Spring Boot 3.1.0 delivers notable new features: improved Testcontainers support including support at development time; support for Docker Compose; enhancements to SSL configuration; and improvements for building Docker images. More details on this release may be found in the release notes.

The release of Spring Boot 3.0.6 primarily addresses CVE-2023-20873, Security Bypass With Wildcard Pattern Matching on Cloud Foundry, a vulnerability in which an application that is deployed to Spring Cloud for Cloud Foundry could be susceptible to a security bypass. Along with improvements in documentation and dependency upgrades, this release also provides notable bug fixes such as: integration of Spring Cloud for Cloud Foundry does not use endpoint path mappings; the ApplicationAvailability bean is auto-configured even if a custom one already exists; and default configuration substitutions in Apache Cassandra don’t resolve against configuration derived from the spring.data.cassandra properties file. More details on this release may be found in the release notes.

Similarly, the release of Spring Boot 2.7.11 also addresses the aforementioned CVE-2023-20873 and provides improvements in documentation, dependency upgrades and the same bug fixes as Spring Boot 3.0.6. More details on this release may be found in the release notes.

Versions 6.1.0-RC1, 6.0.3, 5.8.3 and 5.7.8 of Spring Security have been released, primarily to address CVE-2023-20862, Empty SecurityContext Is Not Properly Saved Upon Logout, a vulnerability in which a serialized logout does not properly clear the security context, and an empty security context cannot be explicitly saved to the HttpSessionSecurityContextRepository class. This results in users still being authenticated even after logout. More details on these releases may be found in the release notes for version 6.1.0-RC1, version 6.0.3, version 5.8.3 and version 5.7.8.

The first release candidate of Spring Session 3.1.0 delivers dependency upgrades and a new feature in which an instance of the StringRedisSerializer class is reused to eliminate the need to instantiate additional serializer instances. More details on this release may be found in the release notes.

The first release candidate of Spring Authorization Server 1.1.0 provides dependency upgrades and new features such as: support for device code and user code in the JdbcOAuth2AuthorizationService class; improvements in the OAuth 2.0 Device Authorization Grant that include adding tests and reference documentation; and improvements in the OpenID Connect 1.0 logout endpoint. More details on this release may be found in the release notes.

Similarly, versions 1.0.2 and 0.4.2 of Spring Authorization Server have also been released featuring dependency upgrades and notable bug fixes: return of an incorrect INVALID_CLIENT token error code to the correct INVALID_GRANT token error code; a broken support link; the authentication secret should be saved after encoding upon registration of the client; and a consideration that would allow the use of localhost in redirect URIs. More details on these releases may be found in the release notes for version 1.0.2 and version 0.4.2.

Versions 6.1.0-RC1 and 6.0.5 of Spring Integration have been released, sharing notable changes such as: removal of a trailing space in the IntegrationWebSocketContainer class; and improvements to the BaseWsInboundGatewaySpec and TailAdapterSpec classes, which didn’t override super methods and threw instances of NullPointerException due to a target field not being populated. More details on these releases may be found in the release notes for version 6.1.0-RC1 and version 6.0.5.

The first release candidate of Spring for GraphQL 1.2.0 delivers new features such as: update the SchemaMappingInspector class to support Connection types; support for pagination with Querydsl and Query By Example; and overall support for pagination and sorting. More details on this release may be found in the release notes.

Versions 3.1.0-M2, 3.0.2 and 2.1.8 of Spring Shell have been released featuring shared notable changes such as: builds upon Spring Boot 3.1.0-M2, 3.0.5 and 2.7.10, respectively; a backport of bug fixes; and a significant fix for custom type handling with positional arguments. More details on these releases may be found in the release notes for version 3.1.0-M2, version 3.0.2 and version 2.1.8.

WildFly

Red Hat has released WildFly 28, which ships with improved support for observability and full support for Jakarta EE 10. WildFly has added support for Micrometer and the MicroProfile Telemetry specification, but has removed support for MicroProfile Metrics. JDK 17 is recommended for production applications, but Red Hat has seen good results on JDK 20. More details on this release may be found in the release notes, and InfoQ will follow up with a more detailed news story.

Payara

Payara has released their April 2023 edition of the Payara Platform that includes Community Edition 6.2023.4, Enterprise Edition 6.1.0 and Enterprise Edition 5.50.0.

Community Edition 6.2023.4 delivers: a fix for a Payara 6 deployment error with JDK 17 and Records; improvements in the SameSite cookie attributes in the Application Deployment Descriptor and a global HTTP network listener; and dependency upgrades to EclipseLink 4.0.1, EclipseLink ASM 9.4.0, Hazelcast 5.2.2 and ASM 9.4. More details on this release may be found in the release notes.

Similarly, Enterprise Edition 6.1.0 features: a fix for a Payara 6 deployment error with JDK 17 and Records; improvements in the SameSite cookie attributes in the Application Deployment Descriptor; and dependency upgrades to EclipseLink 4.0.1, EclipseLink ASM 9.4.0, Hazelcast 5.2.2 and ASM 9.4. More details on this release may be found in the release notes.

Enterprise Edition 5.50.0 ships with: a resolution for CVE-2023-24998, a vulnerability in Apache Commons FileUpload in which an attacker can trigger a denial-of-service with malicious uploads because the number of processed request parts is not limited; a fix for a Hazelcast NoDataMemberInClusterException; an improvement in the SameSite cookie attribute in the Application Deployment Descriptor; and a dependency upgrade to Hazelcast 5.2.2. More details on this release may be found in the release notes.

Open Liberty

IBM has released Open Liberty 23.0.0.4-beta featuring updated support for the Jakarta Data specification such that developers may now combine multiple ways of specifying ordering and sorting, defining a precedence. Sorting that is defined by the @OrderBy annotation or a query-by-method keyword is applied first, followed by the parameters from the Sort record on the method or the Pageable interface.
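
As a rough, hypothetical sketch of how static and dynamic ordering might be combined under those rules (the entity, repository and method names are invented, and the beta API may differ):

@Repository
public interface Products extends CrudRepository<Product, Long> {
    // Ordering from @OrderBy is applied first, followed by any
    // dynamic ordering supplied through the Sort parameters.
    @OrderBy("category")
    List<Product> findByPriceLessThan(double max, Sort... sorts);
}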

Micronaut

The Micronaut Foundation has released Micronaut Framework 3.9.0 that delivers new features such as: the ability to customize a package to write introspection with the targetPackage field of the @Introspected annotation; the ability to enable Cross Origin Resource Sharing (CORS) configuration via the @CrossOrigin annotation; a breaking change in which the configuration property, micronaut.server.cors.*.configurations.allowed-origins, does not support regular expressions to prevent an accidental exposure of a user’s API; and updates to modules such as: Micronaut Kubernetes, Micronaut Security, Micronaut CRaC, Micronaut Maven and Micronaut Launch. More details on this release may be found in the release notes.
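
As a brief, hypothetical sketch of the targetPackage customization mentioned above (the class and package names are invented):

import io.micronaut.core.annotation.Introspected;

// Writes the generated bean introspection into the given package
// instead of the annotated class's own package.
@Introspected(targetPackage = "com.example.introspections")
public class Customer {
    private final String name;

    public Customer(String name) {
        this.name = name;
    }

    public String getName() {
        return name;
    }
}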

Apache Software Foundation

The Apache Tomcat team has provided point releases for versions 11.0.0-M5, 10.1.8, 9.0.74 and 8.5.88. All four versions share notable changes such as: a reduction of the default value of maxParameterCount from 10000 to 1000; a correction of a regression in the fix for bug 66442 that meant that streams without a response body did not decrement the active stream count when completing, leading to an ERR_HTTP2_SERVER_REFUSED_STREAM error for some connections; and an implementation of RFC 9239, Updates to ECMAScript Media Types, in which the MIME type for JavaScript has changed to text/javascript. More details on these releases may be found in the release notes for version 11.0.0-M5, version 10.1.8, version 9.0.74 and version 8.5.88.

Ktor

JetBrains has released version 2.3.0 of Ktor, the asynchronous framework for creating microservices and web applications, with improvements and fixes such as: support for regular expressions when defining routes; dropped support for the legacy JS compiler, which will be removed in the upcoming release of Kotlin 1.9.0; support for Apache 5 and Jetty 11; and support for Structured Concurrency for Sockets. More details on this release may be found in the release notes.

JHipster

The JHipster team has released version 0.32.0 of JHipster Lite with many dependency upgrades and notable changes such as: support for Hibernate second-level cache by setting the spring.jpa.properties.hibernate.cache.use_second_level_cache property to true; remove an unnecessary warning upon executing the npm run lint command; and remove an unnecessary stack trace upon running the npm t command. More details on this release may be found in the release notes.

JBang

The release of JBang 0.106.3 fixes the formatting of an issue where ChatGPT errors on bad keys or usage limits.

Gradle

Gradle 8.1.1 has been released with notable bug fixes: a MethodTooLargeException when instrumenting a class with a significant number of lambdas for the configuration cache; Kotlin DSL precompiled script plugins built with Gradle 8.1 cannot be used with other versions of Gradle; and the Gradle 8.1 configuration of the freeCompilerArgs method for Kotlin in buildSrc breaks a build with errors that are not useful. More details on this release may be found in the release notes.



Global NoSQL Database Market to Witness Exponential Rise in Revenue Share … – Coleman News

MMS Founder
MMS RSS

Posted on nosqlgooglealerts. Visit nosqlgooglealerts

New Jersey, United States – “Global NoSQL Database Market Insight, Forecast 2030” has recently been published by Verified Market Research. The analysts and researchers have performed primary as well as secondary research on a large scale with the help of various methodologies like Porter’s Five Forces and PESTLE Analysis. Key trends and opportunities that may emerge in the near future, and their positive impact on overall industry growth, are discussed in the Global NoSQL Database Market Report. A detailed analysis of the factors positively influencing the growth has been done by the professionals. Key drivers that are fuelling the growth are also discussed in the report. Additionally, challenges and restraining factors that are likely to curb growth in the years to come are put forth by the analysts to prepare the manufacturers for future challenges in advance.

In addition, market revenues based on region and country are provided in the Global NoSQL Database report. The authors of the report have also shed light on the common business tactics adopted by players. The leading players of the Global NoSQL Database market and their complete profiles are included in the report. Besides that, investment opportunities, recommendations, and trends that are trending at present in the Global NoSQL Database market are mapped by the report. With the help of this report, the key players of the Global NoSQL Database market will be able to make sound decisions and plan their strategies accordingly to stay ahead of the curve.

Get Full PDF Sample Copy of Report: (Including Full TOC, List of Tables & Figures, Chart) @ https://www.verifiedmarketresearch.com/download-sample/?rid=129411

Key Players Mentioned in the Global NoSQL Database Market Research Report:

Objectivity Inc, Neo Technology Inc, MongoDB Inc, MarkLogic Corporation, Google LLC, Couchbase Inc, Microsoft Corporation, DataStax Inc, Amazon Web Services Inc & Aerospike Inc.

The competitive landscape is a critical aspect every key player needs to be familiar with. The report throws light on the competitive scenario of the Global NoSQL Database market to know the competition at both the domestic and global levels. Market experts have also offered the outline of every leading player of the Global NoSQL Database market, considering the key aspects such as areas of operation, production, and product portfolio. Additionally, companies in the report are studied based on the key factors such as company size, market share, market growth, revenue, production volume, and profits.

Global NoSQL Database Market Segmentation:  

NoSQL Database Market, By Type

• Graph Database
• Column Based Store
• Document Database
• Key-Value Store

NoSQL Database Market, By Application

• Web Apps
• Data Analytics
• Mobile Apps
• Metadata Store
• Cache Memory
• Others

NoSQL Database Market, By Industry Vertical

• Retail
• Gaming
• IT
• Others

Our market analysts are experts in deeply segmenting the Global NoSQL Database market and thoroughly evaluating the growth potential of each and every segment studied in the report. Right at the beginning of the research study, the segments are compared on the basis of consumption and growth rate for a review period of nine years. The segmentation study included in the report offers a brilliant analysis of the Global NoSQL Database market, taking into consideration the market potential of different segments studied. It assists market participants to focus on high-growth areas of the Global NoSQL Database market and plan powerful business tactics to secure a position of strength in the industry.

Global NoSQL Database market research study is incomplete without regional analysis, and we are well aware of it. That is why the report includes a comprehensive and all-inclusive study that solely concentrates on the geographical growth of the Global NoSQL Database market. The study also includes accurate estimations about market growth at the global, regional, and country levels. It empowers you to understand why some regional markets are flourishing while others are seeing a decline in growth. It also allows you to focus on geographies that hold the potential to create lucrative prospects in the near future.

Inquire for a Discount on this Premium Report @ https://www.verifiedmarketresearch.com/ask-for-discount/?rid=129411

What to Expect in Our Report?

(1) A complete section of the Global NoSQL Database market report is dedicated for market dynamics, which include influence factors, market drivers, challenges, opportunities, and trends.

(2) Another broad section of the research study is reserved for regional analysis of the Global NoSQL Database market where important regions and countries are assessed for their growth potential, consumption, market share, and other vital factors indicating their market growth.

(3) Players can use the competitive analysis provided in the report to build new strategies or fine-tune their existing ones to rise above market challenges and increase their share of the Global NoSQL Database market.

(4) The report also discusses competitive situation and trends and sheds light on company expansions and merger and acquisition taking place in the Global NoSQL Database market. Moreover, it brings to light the market concentration rate and market shares of top three and five players.

(5) Readers are provided with findings and conclusion of the research study provided in the Global NoSQL Database Market report.

Key Questions Answered in the Report:

(1) What are the growth opportunities for the new entrants in the Global NoSQL Database industry?

(2) Who are the leading players functioning in the Global NoSQL Database marketplace?

(3) What are the key strategies participants are likely to adopt to increase their share in the Global NoSQL Database industry?

(4) What is the competitive situation in the Global NoSQL Database market?

(5) What are the emerging trends that may influence the Global NoSQL Database market growth?

(6) Which product type segment will exhibit high CAGR in future?

(7) Which application segment will grab a handsome share in the Global NoSQL Database industry?

(8) Which region is lucrative for the manufacturers?

For More Information or Query or Customization Before Buying, Visit @ https://www.verifiedmarketresearch.com/product/nosql-database-market/ 

About Us: Verified Market Research® 

Verified Market Research® is a leading global research and consulting firm that has been providing advanced analytical research solutions, custom consulting and in-depth data analysis for 10+ years to individuals and companies alike that are looking for accurate, reliable and up-to-date research data and technical consulting. We offer insights into strategic and growth analyses, data necessary to achieve corporate goals, and help make critical revenue decisions.

Our research studies help our clients make superior data-driven decisions, understand market forecasts, capitalize on future opportunities, and optimize efficiency by working as their partner to deliver accurate and valuable information. The industries we cover span a large spectrum, including Technology, Chemicals, Manufacturing, Energy, Food and Beverages, Automotive, Robotics, Packaging, Construction, and Mining & Gas.

We at Verified Market Research assist our clients in understanding holistic market-indicating factors as well as the most current and future market trends. Our analysts, with their high expertise in data gathering and governance, utilize industry techniques to collate and examine data at all stages. They are trained to combine modern data collection techniques, superior research methodology, subject expertise, and years of collective experience to produce informative and accurate research.

Having served 5,000+ clients, we have provided reliable market research services to more than 100 Global Fortune 500 companies such as Amazon, Dell, IBM, Shell, Exxon Mobil, General Electric, Siemens, Microsoft, Sony, and Hitachi. We have also co-consulted with some of the world's leading consulting firms, such as McKinsey & Company, Boston Consulting Group, and Bain and Company, on custom research and consulting projects for businesses worldwide.

Contact us:

Mr. Edwyne Fernandes

Verified Market Research®

US: +1 (650)-781-4080
UK: +44 (753)-715-0008
APAC: +61 (488)-85-9400
US Toll-Free: +1 (800)-782-1768

Email: sales@verifiedmarketresearch.com

Website: https://www.verifiedmarketresearch.com/



Podcast: Real time ML pipelines using Quix with Tomáš Neubauer

MMS Founder
MMS Tomas Neubauer

Article originally posted on InfoQ. Visit InfoQ


Transcript

Introduction [00:44]

Roland Meertens: Welcome everyone to the InfoQ podcast. My name is Roland Meertens, your host today, and I will be interviewing Tomas Neubauer. He is the CTO and founder of Quix. We are talking to each other in person at the QCon London conference, where he gave the presentation "Simplifying Real-Time ML Pipelines with Quix Streams, an Open-Source Python Library for ML Engineers". Make sure to watch his presentation, as it delivers tremendous insights into real-time ML pipelines and how to get started with Quix Streams yourself.

During today’s interview, we will dive deeper into the topic of real-time ML. I hope you enjoy it and I hope you can learn something from it.

Tomas, welcome to the InfoQ podcast.

Tomáš Neubauer: Thank you for having me.

Roland Meertens: You are giving your talk tomorrow here at QCon London. Can you maybe give a short summary of your talk?

About Quix Streams [01:37]

Tomáš Neubauer: Sure, yeah. So I'm talking about the open-source library Quix Streams. It's a Python stream processing library for data and ML workloads on top of Kafka. I'm talking about how to use this library in projects that involve real-time machine learning. I'll talk about the landscape and different architecture designs to solve this problem, with the pros and cons of each. Then I put this against a real use case, which I develop on stage from scratch at the end of the presentation. In this case, it's detecting a cyclist crash: imagine a fitness app running on your handlebars; you crash and want to inform your relatives or the emergency services.

Roland Meertens: So then you are programming this demo live on stage. Which programming language are you using for this?

Tomáš Neubauer: Yes, I'm using the open-source library Quix Streams. So I'm using Python, and I'm basically starting with just the data from the app: telemetry data like the g-force sensor, GPS-based location, speed, et cetera. And I use a machine learning model that has been trained on historical data to detect that the cyclist has crashed.

Roland Meertens: And what kind of machine learning model is this?

Tomáš Neubauer: It's a TensorFlow model, and we basically train it beforehand, so that's not done on stage: we label the data correctly and train it in Google Colab. And I'm going to talk about how to get that model from Colab to production.
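
To make that step concrete, going from Colab to production essentially means exporting the trained model and reloading it inside the streaming service. Below is a minimal, hypothetical sketch of that round trip; the architecture, file name, and window shape are stand-ins for illustration, not the actual model from the talk:

import numpy as np
import tensorflow as tf

# Stand-in for the classifier trained on labelled telemetry in Colab
# (the real architecture and input shape are not described in the talk).
model = tf.keras.Sequential([
    tf.keras.Input(shape=(50, 4)),  # e.g. 50 samples x 4 sensor channels
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.save("crash_detector.keras")  # exported from the Colab notebook

# In the streaming service: load the model once at startup...
detector = tf.keras.models.load_model("crash_detector.keras")

# ...then score each incoming telemetry window.
window = np.zeros((1, 50, 4), dtype=np.float32)
crash_probability = float(detector.predict(window)[0][0])
print("crash detected:", crash_probability > 0.5)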

What is Real Time ML? [02:40]

Roland Meertens: And so if you're talking about real-time machine learning, what do you mean by real time? How fast is real time? When can you really say this is real-time ML?

Tomáš Neubauer: Well, real time in this case will be five times per second. We will receive telemetry data points from the cyclist, so all of the parameters that I mentioned will be streamed to the cloud five times per second. And with, I would say, a 50-millisecond delay, we will inform either the emergency services or a consuming application that there was a crash. There's no one-hour, one-day, or one-minute delay.

Roland Meertens: Okay. So you get this data from your smart device and you cut it up into chunks, which are then sent to your API or to your application?

Tomáš Neubauer: So we're streaming this data directly through the pipeline without batching anything. Basically, it's coming piece by piece, and we are not waiting for anything either. Every 200 milliseconds we do this detection and say either that this is not a crash or that this is a crash. And at the end of the presentation, I will have a simple front-end application with a map and an alert, because obviously I'm not going to crash a bike on the stage. I'm going to have a similar model that will detect shaking of the phone, and I'm going to show everyone that the shake is detected.

Data for Real Time ML [04:19]

Roland Meertens: And where does this come from? How did you get started with this?

Tomáš Neubauer: The roots of the idea for this open-source library come from my previous job, where I was working at McLaren, leading a team that was connecting F1 cars to the cloud and therefore to the factory, so people don't have to travel around the world every second weekend to build real-time decision insight. What I mean by that is basically deciding in a split second that the car needs different tires, different settings for the wing, et cetera. It was a challenging use case with lots of data, around 30 million values per minute from each car. We couldn't use any of the database technology that I'm going to talk about in the presentation, and we had to adopt streaming technology. But the biggest problem we faced was actually getting this technology into the hands of our functional teams, which were made up of mechanical engineers, ML engineers, and data scientists. They all use Python and really struggled to use this new tech that we gave them.

Roland Meertens: And how should I see this? So you have this car going over circuits, generating a lot of data, which it sends back to some kind of ground station. And then do you have humans making decisions in real time, or is it also ML models which are making the decisions?

Tomáš Neubauer: The way it works is that in a car there are sensors collecting data. Some of them sample at more than a kilohertz, more than a thousand values per second. That data is streamed over the radio to the garage, where there's a direct connection to the cloud. Through the cloud infrastructure, it's consumed in the factory, where people build new models during the week. Then, on race day, there are plenty of screens in the garage with dashboards and different waveforms that basically visualize the results of these models, so the people in the garage can immediately decide that the car needs something else.

Roland Meertens: And so this is all part of the race strategy where people need to make decisions in split seconds and this needs the data to be available and the models to run in split seconds?

Tomáš Neubauer: Yes, exactly. And basically, during my time at McLaren, we took that know-how from racing and applied it outside. In the end, we ended up doing the same thing for a high-speed railway in Singapore, where we were using machine learning to detect brake and suspension deterioration based on historical data. So there are certain vibration signatures that will lead to a deterioration of the object.

Programming languages for Real Time ML [06:45]

Roland Meertens: And you were talking about different programming languages, like Java or Python. How does this integrate with what you're working on?

Tomáš Neubauer: Basically, the whole streaming world is traditionally Java-based. Most of the brokers are built in Java or Scala, and as a result, most of the tools around them and most of the libraries and frameworks are built in Java. There are some ports and some libraries that let you use these from Python, although there are just a few of them. It's quite painful because this connection doesn't really work well, and therefore it's quite difficult for Python people, especially people from data teams, to leverage this stack. As a result, most of the projects really don't work that way. Most people work in Jupyter Notebooks and in silos, and then software engineering takes these models into production.

Roland Meertens: So what do you do to improve this?

Tomáš Neubauer: What I believe is that unless a data team works directly on the product, it's never going to work really well, because people don't see the results of their work immediately and they are dependent on other teams. And every time one team is dependent on another, it just kills innovation and efficiency. So the idea is that the data team contributes directly to the product and can test and develop straight away. The code doesn't run in a Jupyter Notebook and stay there, but actually goes to real-time pipelines, so people can immediately see the result of their work on a physical thing.

Roland Meertens: And you mentioned that there are different ways people can orchestrate something like this, different ML architectures you could use for such an approach. Which ones are there?

Tomáš Neubauer: There are many options to choose from, across all the different dimensions from which you can look at the architecture of building such a system. One of them is obviously whether you're going to go for batch or streaming: are you going to use technology like Spark and react to data in batches, or do you need a real-time system where you use something like Kafka or Pulsar or other streaming technologies? The second thing is how you're actually going to use your ML models.

So you can deploy them behind an API, or you can actually embed them in a streaming transformation, and I discuss the pros and cons of each solution.

Roland Meertens: And what do you mean by a streaming transformation?

Tomáš Neubauer: This is a fundamental concept of what I'm going to talk about, which is pub/sub. Basically, we are going to subscribe our model to a topic where we are going to get input data from the phone, and we are going to output the result: is there a crash or not? This is the major architectural cornerstone of this approach.
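
As an illustration of that cornerstone, a subscribe-transform-publish service written with Quix Streams might look roughly like the sketch below. This is a hedged sketch, not code from the talk: the topic names, field names, and threshold heuristic are hypothetical, and the exact API surface varies between library versions.

from quixstreams import Application

def detect_crash(row: dict) -> bool:
    # Placeholder heuristic standing in for the trained model:
    # flag a crash when the g-force reading spikes.
    return row.get("gforce", 0.0) > 4.0

app = Application(broker_address="localhost:9092", consumer_group="crash-detector")
telemetry = app.topic("phone-telemetry")  # input: raw sensor readings
alerts = app.topic("crash-alerts")        # output: detection results

sdf = app.dataframe(telemetry)                             # subscribe
sdf = sdf.apply(lambda row: {"crash": detect_crash(row)})  # transform
sdf = sdf.to_topic(alerts)                                 # publish

if __name__ == "__main__":
    app.run(sdf)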

The tools needed [09:22]

Roland Meertens: Okay. And you mentioned for example, Kafka and you mentioned some other tools. How does your work then relate to this?

Tomáš Neubauer: Well, what we found out is that Kafka, although it's powerful, is quite difficult to use. So we have built a level of abstraction on top of it. Then we found that that's not enough, actually, because streaming in itself introduces complexities and different approaches to common problems. I have a nice example of that tomorrow. So we are building an abstraction on top of the streaming concept as well, which means that you operate and develop your code in Python as you would in a Jupyter Notebook. What you are used to when working with static data applies to building a streaming transformation.

Roland Meertens: And how do you do this? How can people test this, with a pre-recorded stream which they then replay? And can you still use a Jupyter Notebook, or do you, as a machine learning engineer or data scientist, then lose part of your tooling?

Tomáš Neubauer: So Quix Streams is an open-source library that you can just download from PyPI, use, and connect to your broker. If you don't have a broker, you can set one up; it's open-source software as well. If you don't want to, you can use our managed broker instead; it doesn't matter, it works the same. And then we have some open-source data simulators that you can use if you don't have your own data. For example, we have an F1 simulator which will give you high-resolution data, so that's quite cool. You can also, for example, subscribe to Reddit and get messages from Reddit, or you can use the app I'm going to show you tomorrow. It's also open source, so you can install it from the App Store, or possibly you can even clone it, change it to suit your needs, and deploy it yourself.

Different modalities [11:06]

Roland Meertens: So then Quix handles both text messages but also audio? What kind of data do you handle?

Tomáš Neubauer: Yeah, so we handle time-series data, which involves numerical and string values. Then we handle binary data, which is audio, video, geospatial, et cetera, where we allow developers to just attach this as a column. And then we have metadata, so you don't have to repeat, for example, that this bike has firmware 1.5. You just send it once and the stateful pipeline will persist that information. And at the end, you can also send events. A crash is a good example of an event; it doesn't have any continuous information.

Roland Meertens: Okay. So can you also connect these pipelines, such that one pipeline, for example, gets all the information from your sensor and then sends events to another pipeline? Is this something that is supported?

Tomáš Neubauer: Yes. The whole idea of building systems with this approach is building pipelines. Each node in your architecture is a container that connects to one or more input topics and outputs results to one or more output topics. You create a pipeline that has multiple branches; sometimes they join back together, sometimes they end, and when they end they usually go either to a database or back to the product. And the same goes for the starts: they could be from your product, or they could be CDC from a database. So you have multiple sources, multiple destinations, and in the middle you have one or more transformations.

Roland Meertens: And is there some kind of limit to the number of inputs or the number of consumers you can have for a pipeline?

Tomáš Neubauer: There isn't really a limit to the number of transformations or sources. One thing is that Kafka is designed to be one-to-one, or one to a small number of consumers and producers. So if you have a use case like the one we're going to do today with the phones, where you can possibly have thousands or millions of users, you need to put some gateway between your devices and Kafka, which we'll do. In our case, it'll be a WebSocket gateway collecting data and then funneling it to a topic.

Roland Meertens: Okay. So do you still have some queue in between?

Tomáš Neubauer: There isn't really any queue in between, but there's obviously a queue in Kafka. As the data flows to the gateway, it's put into the queue in a topic, and then the services listening to it will just collect, consume, and process this data from that queue.

Use cases for Real Time ML [13:33]

Roland Meertens: Do you already have some customers who are using this in creative or interesting ways? What are the most interesting use cases you've seen?

Tomáš Neubauer: Yes. One really cool use case is from healthcare, where there are sensors on your lungs listening to your breathing, and the data is then sent to the cloud. Machine learning is used to detect different illnesses that you have, and that all goes to the company's app. So it's quite similar to what we are going to do here. A second quite interesting use case, in public transport, is WiFi sensors detecting the occupancy of underground stations, automatically closing and opening doors, and sending people to a less occupied part of the station.

Roland Meertens: Oh, interesting. So then you have some signal which tells you how many people are in certain parts of the station?

Tomáš Neubauer: Yes, correct. So you have the routers all around the station, and then in real time you know that in the north part of the station there are more people than in the south, and therefore it would be better if people come from the south. And you can do this in a split second.

The implementation [14:33]

Roland Meertens: Oh, interesting. And then in terms of the implementation: if we, for example, want to have some machine learning model act on it, are there specific limitations or specific frameworks you have to use?

Tomáš Neubauer: Basically, the beauty of this approach, and I think that's why it's so suited to machine learning, is that it's just Python at the end, where all the magic happens. You read data from Kafka into Python, and then in that code you are free to do whatever you want. That could mean using any pip package out there; you can use a library like OpenCV for image processing, and really anything that is possible in Python is possible with this approach. Then you just output it again with the Python interface. So there's no black-box operation, and there's no domain-specific language like you will find in Flink.
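
As a small example of what "just Python" means in practice, the transformation plugged into the pipeline is an ordinary function, free to import any pip package. The field names and threshold below are hypothetical:

import numpy as np

def transform(row: dict) -> dict:
    # Ordinary Python: any pip package can be used here, NumPy for
    # feature extraction, OpenCV for image processing, and so on.
    accel = np.array([row.get("gx", 0.0), row.get("gy", 0.0), row.get("gz", 0.0)])
    gforce = float(np.linalg.norm(accel))
    return {**row, "gforce": gforce, "crash": gforce > 4.0}

# The same function could be attached to a pipeline with sdf.apply(transform),
# or run as-is in a Jupyter Notebook cell against static test data:
print(transform({"gx": 3.0, "gy": 4.0, "gz": 0.0}))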

Roland Meertens: Do I basically just say, "Whenever you have a new piece of data, call this Python function with these arguments"?

Tomáš Neubauer: Correct. And even more than Python functions, you can build Python classes with all the structure that you are used to in Python. You can also try it in a Jupyter Notebook; the library will work in a notebook cell. So again, there's basically freedom of deployment in running this code anywhere: it's just Python.

Roland Meertens: If people are listening and they’re beginners in realtime machine learning, how would you get started? What would you recommend to people?

Tomáš Neubauer: Well, first of all, what I'm doing here today is available as a tutorial; all the code is open source, so you can basically try it by yourself. There are other tutorials that we have published that go through different use cases, step by step, from literally installing Python and installing Kafka, things like that, to getting this going from the start. So I recommend people go to the docs that we have for the library. There are tutorials there, and there are some concepts described in detail. So yeah, that would be the best place to start.

Roland Meertens: Are there specific concepts which are difficult to grasp or is it relatively straightforward?

Tomáš Neubauer: What is really complicated is stateful processing, which we are trying to solve and abstract away. But if you are interested in learning more about stateful processing, we have it explained in the docs. That's a very interesting concept, and it will open up the intricacies of stream processing. But I think the goal of the library really is to make it simpler. Obviously, it's a journey, but I'm confident that we have already done a great job in making it a bit easier than it was.

Roland Meertens: Thank you very much. Thank you for joining the podcast, and good luck with your talk tomorrow. Hopefully people can watch the recording online.

Tomáš Neubauer: Thank you for having me.




ASP.NET Core Updates in .NET 8 Preview 3: Native AOT Support and More

MMS Founder
MMS Robert Krzaczynski

Article originally posted on InfoQ. Visit InfoQ

Recently, Microsoft released .NET 8 Preview 3. This new release contains many improvements to ASP.NET Core, such as support for native AOT, server-side rendering with Blazor, rendering Razor components outside of ASP.NET Core, sections support in Blazor, and monitoring Blazor Server circuit activity.

In .NET 8 Preview 3, native AOT support for ASP.NET Core was added. This makes it possible to publish an ASP.NET Core application with native AOT, creating a standalone application that is compiled ahead of time (AOT) into native code. Publishing and deploying a native AOT application can reduce disk size, memory demand, and startup time.

Microsoft developers launched a simple ASP.NET Core API application to compare the differences in application size, memory usage, runtime, and CPU load when published with and without native AOT. Publishing the application with native AOT improves startup time and application size: in the experiment, startup time was reduced by 80% and application size by 87%. These and other metrics are available on Microsoft's public benchmarking dashboard.

However, not all features and libraries in ASP.NET Core are compatible with native AOT. The .NET 8 release represents the beginning of the work to bring native AOT to ASP.NET Core, with an initial focus on supporting applications that use Minimal APIs or gRPC and are deployed in cloud-native environments. A table showing the compatibility of ASP.NET Core features with native AOT is included in the announcement of the ASP.NET Core updates in .NET 8 Preview 3.

This preview version also adds initial support for server-side rendering with Blazor components. This is the start of the Blazor unification work to enable the usage of Blazor components for all web UI needs, client-side and server-side. Blazor components are available for server-side rendering without the need for any .cshtml files. The framework discovers Blazor components with routing support and configures them as endpoints. There are no WebAssembly or WebSocket connections and no need to load any JavaScript. Each request is handled separately by the Blazor component for the corresponding endpoint.

The work to enable server-side rendering with Blazor components also made it possible to render Blazor components outside the context of an HTTP request. Razor components can be rendered as HTML directly into a string or stream, regardless of the ASP.NET Core hosting environment. This is helpful in scenarios where you want to generate HTML fragments.

Another point related to Blazor is the addition of the SectionOutlet and SectionContent components. They provide support for identifying outlets for content to be filled in later. Sections are often used to define placeholders in layouts that are then populated by specific pages. Sections are referenced by either a unique name or a unique object identifier.

Moreover, it is now possible to monitor inbound circuit activity in Blazor Server applications using the new CreateInboundActivityHandler method on CircuitHandler. Inbound circuit activity is any activity sent from the browser to the server, such as user interface events or JavaScript-to-.NET interop calls.

The improvements added to ASP.NET Core received positive feedback from the community, and .NET developers left many comments under the release announcement. They especially appreciated the focus on performance and AOT compilation. There was also a question from Ömer Kaya about the availability of Blazor United in .NET 8, which Daniel Roth, a principal program manager at Microsoft, answered:

The Blazor United effort is really a collection of features we’re adding to Blazor so that you can get the best of server & client-based web development. These features include: Server-side rendering, streaming rendering, enhanced navigations & form handling, adding client interactivity per page or component, and determining the client render mode at runtime. We’ve started delivering server-side rendering support for Blazor with .NET 8 Preview 3, which is now available to try out. We plan to deliver the remaining features in upcoming previews. We hope to deliver them all for .NET 8, but we’ll see how far we get.



Android 14 Beta 1 Hits the Block

MMS Founder
MMS Sergio De Simone

Article originally posted on InfoQ. Visit InfoQ

Now available to developers, the first beta of Android 14 focuses on privacy, security, performance, developer productivity, and user customization. In addition, it improves the user experience on large-screen devices such as tablets and foldables.

To better protect sensitive data, Android 14 introduces the new accessibilityDataSensitive attribute. Apps can use this attribute to make specific data and views accessible only to Google's and third-party services that are meant to help users with disabilities.

If an app uses this new attribute, its visibility will in fact be limited to apps that declare the isAccessibilityTool attribute. Play Protect is the mechanism responsible for scanning apps when they are downloaded from the Play Store and making sure they use the isAccessibilityTool attribute only if they are actually meant to help people with disabilities.

Google says that there are two main use cases where apps can benefit from this new feature: protecting user data from third-party access and preventing critical actions from being carried out, such as authorizing a payment with a credit card. The importance of this feature cannot be overstated, since it brings fully under developer control which data an app considers sensitive and thus protected from general external access.

Additionally, the Android 14 beta improves a number of system UI elements, including a new, more prominent back arrow and a customizable share sheet.

Apps can add custom actions to the system share sheet by creating a ChooserAction and passing it via the Intent.EXTRA_CHOOSER_CUSTOM_ACTIONS extra. This has the effect of displaying a separate row of app-specific actions on top of the cross-system action row.

The new share sheet also makes it easier to go back to the invoking app and to add new items to those being shared. Finally, the UI has been improved by allowing you to scroll, in case you are sharing a large number of images, and to mix text and images.

Android 14 beta 2 will become available during Google I/O next month, and beta 3 in June. Beta 4, coming in July, will be the final beta before the official release.

For a full list of all changes in the Android 14 beta, do not miss this Twitter thread by Mishal Rahman, co-host of the All About Android show.
