ROSEN, RECOGNIZED INVESTOR COUNSEL, Encourages MongoDB, Inc. Investors to …

MMS Founder
MMS RSS


New York, New York–(Newsfile Corp. – July 27, 2024) – WHY: Rosen Law Firm, a global investor rights law firm, reminds purchasers of securities of MongoDB, Inc. (NASDAQ: MDB) between August 31, 2023 and May 30, 2024, both dates inclusive (the “Class Period”), of the important September 9, 2024 lead plaintiff deadline.

SO WHAT: If you purchased MongoDB securities during the Class Period, you may be entitled to compensation without payment of any out-of-pocket fees or costs through a contingency fee arrangement.

WHAT TO DO NEXT: To join the MongoDB class action, go to https://rosenlegal.com/submit-form/?case_id=27182 or call Phillip Kim, Esq. toll-free at 866-767-3653 or email case@rosenlegal.com for information on the class action. A class action lawsuit has already been filed. If you wish to serve as lead plaintiff, you must move the Court no later than September 9, 2024. A lead plaintiff is a representative party acting on behalf of other class members in directing the litigation.

WHY ROSEN LAW: We encourage investors to select qualified counsel with a track record of success in leadership roles. Often, firms issuing notices do not have comparable experience, resources, or any meaningful peer recognition. Many of these firms do not actually litigate securities class actions, but are merely middlemen that refer clients or partner with law firms that actually litigate the cases. Be wise in selecting counsel. The Rosen Law Firm represents investors throughout the globe, concentrating its practice in securities class actions and shareholder derivative litigation. Rosen Law Firm has achieved the largest-ever securities class action settlement against a Chinese company. Rosen Law Firm was ranked No. 1 by ISS Securities Class Action Services for the number of securities class action settlements in 2017. The firm has been ranked in the top 4 each year since 2013 and has recovered hundreds of millions of dollars for investors. In 2019 alone, the firm secured over $438 million for investors. In 2020, founding partner Laurence Rosen was named by Law360 as a Titan of Plaintiffs’ Bar. Many of the firm’s attorneys have been recognized by Lawdragon and Super Lawyers.

DETAILS OF THE CASE: According to the lawsuit, throughout the Class Period, defendants created the false impression that they possessed reliable information pertaining to the Company’s projected revenue outlook and anticipated growth while also minimizing risk from seasonality and macroeconomic fluctuations. In truth, MongoDB’s sales force restructure, which prioritized reducing friction in the enrollment process, had resulted in a complete loss of upfront commitments; a significant reduction in the information gathered by its sales force as to the trajectory of new MongoDB Atlas enrollments; and reduced pressure on new enrollments to grow. Defendants misled investors by providing the public with materially flawed statements of confidence and growth projections that did not account for these variables. When the true details entered the market, the lawsuit claims, investors suffered damages.

To join the MongoDB class action, go to https://rosenlegal.com/submit-form/?case_id=27182 or call Phillip Kim, Esq. toll-free at 866-767-3653 or email case@rosenlegal.com for information on the class action.

No Class Has Been Certified. Until a class is certified, you are not represented by counsel unless you retain one. You may select counsel of your choice. You may also remain an absent class member and do nothing at this point. An investor’s ability to share in any potential future recovery is not dependent upon serving as lead plaintiff.

Follow us for updates on LinkedIn: https://www.linkedin.com/company/the-rosen-law-firm, on Twitter: https://twitter.com/rosen_firm or on Facebook: https://www.facebook.com/rosenlawfirm/.

Attorney Advertising. Prior results do not guarantee a similar outcome.

——————————-

To view the source version of this press release, please visit https://www.newsfilecorp.com/release/217992

Article originally posted on mongodb google news. Visit mongodb google news



AWS Releases User Guide for the Digital Operational Resilience Act (DORA)

MMS Founder
MMS Renato Losio

Article originally posted on InfoQ. Visit InfoQ

Amazon recently released the AWS User Guide to the Digital Operational Resilience Act (DORA). The document details how AWS services support financial entities in complying with DORA’s requirements for operational resilience, including ICT risk management, incident reporting, testing, and third-party risk management.

Released over a year after AWS submitted a response to the consultation on the second batch of DORA technical standards, the new guide offers a series of considerations for financial entities (FEs) seeking to meet the regulatory expectations set by DORA. It explains how FEs can utilize AWS services and documentation to help demonstrate their compliance with DORA requirements.

As the financial sector becomes increasingly dependent on technology and a few cloud companies to deliver financial services, DORA introduces new regulatory requirements to achieve a high common level of digital operational resilience. It entered into force on January 16, 2023, and will require compliance by January 17, 2025.

Stephen Martin, head of security and compliance for financial services industries at AWS, Akshay Dalal, EMEA regulatory risk and compliance at AWS, and Eduardo Vilela, head of FSI regulatory enablement EMEA at AWS, explain:

This guide describes the roles that AWS and its customers play in managing operational resilience in and on AWS, describes the AWS Shared Responsibility Model, compliance frameworks, AWS services, and features, and measures that customers use to evaluate their compliance with sample DORA requirements when adopting AWS.

The new European regulation covers ICT risk management requirements, reporting major ICT-related incidents and cyber threats, digital operational resilience testing, and information sharing on cyber threats and vulnerabilities. It includes measures for managing ICT third-party risk across 20 different types of financial entities and ICT third-party service providers, including major cloud providers. Maria E. Tsani, head of financial services public policy EMEA at AWS, previously wrote:

Our lack of visibility into data uploaded into a customer’s AWS account is a fundamental part of the governance model that operates in a cloud environment (the AWS Shared Responsibility Model).

While the regulation does not set any restrictions on the adoption and use of cloud services, Martin, Dalal, and Vilela add:

The regulation promotes a principles-based approach to ICT risk management, giving FEs the flexibility to use different management models as long as they address key functions such as identification, protection, detection, response, recovery, and communications.

One of the debated topics is the reliance on a single cloud provider. András Gerlits, founder at omniledger.io, comments:

Confusingly, DORA says you are legally allowed to use your exclusive cloud provider, but disallows this technically. It does this by expecting banks to have a monitoring, a mitigation and a recovery strategy in place in case of a disruption event. So sure, use your AWS/Azure/GCP for everything, but you must also be able to shift immediately with no data loss.

AWS is not the only cloud provider recently outlining its steps towards DORA compliance. Google has simplified the process with Google Cloud’s updated contracts and Microsoft has explained how to strengthen operational resilience and reduce concentration risk in financial services. IBM and Oracle OCI also provide dedicated resources.



Amazon EC2 R8g Instances with AWS Graviton4 Processors Generally Available

MMS Founder
MMS Steef-Jan Wiggers

Article originally posted on InfoQ. Visit InfoQ

AWS has announced the general availability of Amazon EC2 R8g instances, which use AWS Graviton4 processors. These instances have been available in preview since November 2023 and are designed for memory-intensive workloads such as databases, in-memory caches, and real-time big data analytics.

The AWS Graviton4 inside the R8g instances is the latest generation of processors designed by the company specifically for cloud computing. These processors are built using ARM architecture, known for its high performance and energy efficiency.

Esra Kayabali, a senior solutions architect at AWS, wrote:

If you are looking for more energy-efficient compute options to help you reduce your carbon footprint and achieve your sustainability goals, R8g instances provide the best energy efficiency for memory-intensive workloads in EC2.

According to AWS, Graviton4-based Amazon EC2 instances offer up to 3x more vCPUs and memory than Graviton3-based instances, and are up to 30% faster for web applications, 40% faster for databases, and 45% faster for large Java applications. The R8g instances come in 12 sizes, including two bare-metal sizes, and offer enhanced networking bandwidth of up to 50 Gbps and up to 40 Gbps to Amazon EBS. Elastic Fabric Adapter (EFA) networking support is available on select sizes, and Elastic Network Adapter (ENA) Express support is offered on instance sizes larger than 12xlarge.

Furthermore, they are built on the AWS Nitro System to enhance performance and security by offloading CPU virtualization, storage, and networking functions to dedicated hardware and software.

The R8g instances are ideal for all Linux-based workloads, including containerized and micro-services-based applications. These applications can be built using Amazon EKS, Amazon ECS, Amazon ECR, Kubernetes, Docker, and popular programming languages like C/C++, Rust, Go, Java, Python, .NET Core, Node.js, Ruby, and PHP.

The R8g instances are part of the memory-optimized instance family in Amazon EC2, which also includes the R5, R6g, and R7g instances. The main differences among these instances are the processor, memory, and instance sizes.

Ayush Jain, an analyst at TechInsights, wrote in a LinkedIn post:

His colleague Owen Rogers ran a series of simple performance benchmarks on AWS’s r6g.large, r7g.large, and r8g.large EC2 instances, which are based on Graviton 2, 3, and 4 CPUs, respectively.

Broadly, the benchmark results show that AWS continues to strengthen the Graviton momentum while maintaining a healthy balance between performance and price.

Currently, the R8g instances are available in the following AWS Regions: US East (N. Virginia), US East (Ohio), US West (Oregon), and Europe (Frankfurt). Users can purchase R8g instances as Reserved Instances, On-Demand, Spot Instances, and via Savings Plans. More details on pricing are available on the EC2 pricing page.
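
For readers who want to experiment with the new instances programmatically, the following is a minimal sketch using the AWS SDK for JavaScript v3 (not part of the original announcement); the Region, AMI ID, and instance size are illustrative placeholders.

// Minimal sketch: launching an r8g.large instance with the AWS SDK for JavaScript v3.
import { EC2Client, RunInstancesCommand } from "@aws-sdk/client-ec2";

const client = new EC2Client({ region: "us-east-1" }); // one of the Regions where R8g is available

const response = await client.send(new RunInstancesCommand({
  ImageId: "ami-xxxxxxxxxxxxxxxxx", // placeholder: use an Arm64 (Graviton-compatible) AMI in your Region
  InstanceType: "r8g.large",
  MinCount: 1,
  MaxCount: 1,
}));

console.log(response.Instances?.[0]?.InstanceId); // ID of the newly launched instance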



JavaScript Set Methods

MMS Founder
MMS Agazi Mekonnen

Article originally posted on InfoQ. Visit InfoQ

The release of Firefox 127 introduces new JavaScript Set methods, including intersection(), union(), difference(), symmetricDifference(), isSubsetOf(), isSupersetOf(), and isDisjointFrom(), now supported across major browser engines. Polyfills are no longer needed to make them work everywhere. These additions provide convenient, built-in ways to manipulate and compare collections, aiming to simplify development and enhance performance.

JavaScript Sets function similarly to Arrays but guarantee the uniqueness of each value. This automatic removal of duplicates makes Sets perfect for creating unique collections. For instance, here’s a simple example of creating and adding elements to a Set:


const users = new Set();
const alice = { id: 1, name: "Alice" };
users.add(alice);
users.add(alice); // adding the same value again is a no-op: a Set stores each value only once

users.forEach(user => { console.log(user) }); // logs the Alice object a single time
    

Sets are also typically faster for checking if an element exists compared to Arrays, making them useful for performance-sensitive applications.
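
As a small illustration of that difference (not from the original article), Set.prototype.has() performs the membership check that Array.prototype.includes() would otherwise do by scanning the array:

const names = ["Alice", "Bob", "Charlie"];
const nameSet = new Set(names);

// Array lookup scans elements one by one (linear time on average).
console.log(names.includes("Charlie")); // true

// Set lookup uses a hash-based internal structure (roughly constant time).
console.log(nameSet.has("Charlie"));    // true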

The union() method returns a new Set containing elements from both the original Set and the given Set. This is useful for combining collections without duplicates:


const set1 = new Set(["Alice", "Bob", "Charlie"]);
const set2 = new Set(["Bob", "Charlie", "David"]);
const unionSet = set1.union(set2);

unionSet.forEach(name => {
  console.log(name); // Outputs: Alice, Bob, Charlie, David
});
    

The intersection() method returns a new Set containing only elements present in both Sets. This is helpful for finding common elements:


const intersectionSet = set1.intersection(set2);

intersectionSet.forEach(name => {
  console.log(name); // Outputs: Bob, Charlie
});
    

The symmetricDifference() method returns a new Set containing elements present in either of the Sets but not in both. This is useful for finding unique elements between two Sets:


const symmetricDifferenceSet = set1.symmetricDifference(set2);

symmetricDifferenceSet.forEach(name => {
  console.log(name); // Outputs: Alice, David
});
    

The difference() method returns a new Set containing elements present in the original Set but not in the given Set. This is useful for subtracting elements:


const set1Only = set1.difference(set2);

set1Only.forEach(name => {
  console.log(name); // Outputs: Alice
});
    

The methods isSubsetOf() and isSupersetOf() return Boolean values based on the relationship between Sets. The isSubsetOf() method checks if all elements of a Set are in another Set, while the isSupersetOf() method determines if a Set contains all elements of another Set.


const subset = new Set(["Alice", "Bob"]);
const superset = new Set(["Alice", "Bob", "Charlie"]);

if (subset.isSubsetOf(superset)) {
  console.log("subset is a subset of superset"); // This will be printed because all elements in subset are also in superset
} else {
  console.log("subset is not a subset of superset");
}

if (superset.isSupersetOf(subset)) {
  console.log("superset is a superset of subset"); // This will be printed because all elements in subset are also in superset
} else {
  console.log("superset is not a superset of subset");
}
    

The isDisjointFrom() method checks if two Sets have no common elements:


const set3 = new Set(["Eve", "Frank", "Gina"]);

if (set1.isDisjointFrom(set2)) {
  console.log("Set1 and Set2 are disjoint");
} else {
  console.log("Set1 and Set2 are not disjoint"); // This will be printed because set1 and set2 share "Bob" and "Charlie"
}

if (set1.isDisjointFrom(set3)) {
  console.log("Set1 and Set3 are disjoint"); // This will be printed because set1 and set3 have no elements in common
} else {
  console.log("Set1 and Set3 are not disjoint");
}
    

The community has responded positively to these new methods. In a Reddit thread, user peterlinddk said:

“Excellent – finally we can use Set for more than a ‘duplicate-detector’. I only wish that there were some way for objects to be ‘equal’ without them having to be the exact same instance. Kind of like Java’s .equals and .hashCode methods.”

Another user, Pelopida92, praised the performance benefits, stating:

“Sets are awesome. I used them extensively for some big-data scripts, as they have way better performance than arrays and are very easy to use and convenient.”



Presentation: Hydration and Lazy-Loading Are Incompatible

MMS Founder
MMS Misko Hevery

Article originally posted on InfoQ. Visit InfoQ

Transcript

Hevery: I’m Miško. I like to start my talks with a joke. I like dad jokes. How do functions break up? They stop calling each other. This is me, CTO Builder.io. I did this thing called AngularJS and Angular. Also had something to do with Karma. Now I can’t seem to stop this habit, and so I created a new framework called Qwik. I work for Builder.io. I’m a CTO. What we do is we make a headless visual CMS. When I first joined, I had no idea what that means. Let me break this down. It’s basically Wix, you can drag and drop, build websites. Wix is great. The problem is, it’s hosted on Wix website. You can’t have your own existing infrastructure and own existing applications on Wix. Builder.io is a npm install of the editor into your code base. Whether you use React, Vue, Svelte, Angular, Qwik, doesn’t matter. Then you can register your own components, and now your marketing team can drag and drop components and build things. It’s on your existing infrastructure, so you don’t have to go somewhere else. This is our awesome open source team.

Basically, we have Qwik, we have Partytown, we have Mitosis. Partytown allows you to run third party code in web workers. I am a big fan of getting stuff off of the main thread, because I think running stuff on main thread is a performance bottleneck, and it’s a problem. It’s what I’m going to talk about. Mitosis allows you to write code once and it generates that code for all existing frameworks. It’s not a wrapper. It’s a canonical code that you would have written with your own hands. We have these awesome folks who work on it.

Core Web Vitals

What are we talking about here? I think this story starts with core web vitals. Core web vitals is basically Google’s attempt to push the web forward. The idea is, shine a light on it in terms of performance. Then people have motivation to do something about it and improve. Core web vitals is actually used by Google for SEO ranking. If you have bad scores, you might get negatively impacted in your SEO score. You should really worry about getting good scores here. It turns out, there’s a lot of different things that they measure. One of them is essentially this idea of time to interactive, which is that I navigate to a page, and I want to go and interact with a page, how long before I can interact with a page? The idea is that that should be pretty quick. Otherwise, you’re going to lose interest and you go somewhere else.

This is a place where we as an industry aren’t doing very well, we tend to fail this problem. The problem basically comes down to, you navigate to a page and a huge amount of JavaScript has to download, a huge amount of JavaScript has to execute. Then the page becomes interactive. I’m sure you’ve seen this, where you go to a page, somebody sends you a link on Twitter and X, and you click on it, and it’s like these awesome shoes. You have to get these shoes. You want to push the buy button and nothing happens. You click a few times, and nothing’s happened. At some point, you’re like, I don’t need the shoes, and you leave. This has a negative impact on monetization.

Companies should really care about their performance, because you really want to have a good experience. You want to have a website where the moment you see the button, you can click on it. When you click on it, you know something will happen. That is not the world we are today. In the world we are today is we get to see these buttons before they’re ready. On mobile and crappy network, it might take multiple seconds. It’s not unheard of to take 30 seconds, before a site really becomes fully interactive.

The culprit is JavaScript, which is that the amount of JavaScript we’re sending to the browser is steadily increasing. I’m not here to tell you stop writing JavaScript. That’s ridiculous. That’s not going to fly. Also, I’m not saying here that you need to go back to 20 years ago, where we had HTML without JavaScript, because that’s also not going to fly. What it is, is that JavaScript adds interactivity to our websites. This interactivity is something that our users expect. This isn’t about writing less JavaScript, this talk is really about like, how do you ship less JavaScript in the sense that you need this JavaScript, but the user doesn’t need all of it all at once.

The user can’t click on all the buttons in the UI all at once. A user can’t be in every single menu all at once. Why does the code have to be in the browser all at once? That’s the thing that we’re going to talk about. If you don’t believe me, most sites fail the core web vitals. You would think that we would do well on this particular metric as an industry, because there’s real money to be made, and as I said, if you click on a button on a site and it doesn’t update quickly enough, then issues arise in terms of profit or sales and conversion rates. You would think that we would have motivation to fix this. If you look across the board, it’s red everywhere. Only a few sites like Amazon have the necessary resources to make it into the yellow, and green is almost unheard of for a real website that actually has real traffic and real users. I’m sure there’s a Hello World website somewhere whose score is green, but I’m talking about real websites that have real money going through the system.

Example

To simplify the problem, I’m going to start with the simplest possible example you can think of, and let’s start with a counter. A counter is super simple. You have a button. You have some initial count, let’s say in this case is 123. When you hit on that button, the count increases. That’s the smallest app you can think of. The reason I like this app, is because there are three fundamental things going on in there. There’s a state, the initial state is 123. There is mutation, which is that listener, when you click on you mutate the state. Then there is a binding that binds the output to the UI. Whatever complicated application you have is just a more complicated version of this.
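
As a rough sketch of what the speaker is describing (illustrative plain JavaScript, not any framework’s code), the counter’s three pieces look like this; the #counter element is assumed to exist in the page:

let count = 123;                              // state

const button = document.querySelector("#counter");
button.textContent = count;                   // binding: render the state into the UI

button.addEventListener("click", () => {      // mutation: the click listener
  count++;
  button.textContent = count;                 // re-render the binding after the state changes
});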

Fundamentally, it’s all about state, mutation, and rendering. This is not a real-world example because all of those three things are inside of a single component. In the real world, things are a little more complicated. What we do is we’re going to break this up. We’re going to say, the state is declared in one component, the mutation in another component, and the display in a third component. You can think of it as, what if this mutation is your add-to-the-shopping-cart button that’s here on a page. Then the shopping cart display is here on the page and they have to share state, and therefore, the lowest common root is what gets the state of the shopping cart and the button, and then that state gets passed into all of the required places.

The real-world apps really look like this. Whether it’s a counter or a shopping cart, is just more complicated version of it. Fundamentally, this is what we have. It looks like this. You have a counter, which contains the state, it’s not visual. You have the action, which contains the listener. You have the count which contains the display.

Still not complicated an app. Let’s introduce a couple more things. Typically, we have some kind of an AppRoot, which is a static piece of code that sets things up, sets out the layout, sets out basic headers and footers, and all kinds of other things. The other thing is that it’s not that the action, like the shopping cart is a direct child of the counter, usually there is some extra wrappers that go in there. The same thing for shopping cart. Shopping cart is not a direct child of where the state is stored, usually there’s extra components that are in the mix.

A more realistic example would be that you have these extra components around you that wrap and pass the data. Fundamentally, you still have the state flowing through the system. A more complicated version would be something like this. I’m going to throw in one more complication, and it’s going to make sense why we’re doing this. That is, you probably want to have a leaf component, in this case, an action item or DisplayIcon. What I’ve done is I made certain of these components in dashed lines, because what I’m telling you is that, these components are fundamentally not needed for the running of the application. They’re just extra fluff that is needed for styling to get the right construct, mental models. They’re not there for the purposes of the application.

The application really just wants to have the mutation, the state, and the render, everything else is unnecessary extra stuff. Because this is unnecessary extra stuff, one should be able to argue that, I don’t need this stuff. As part of SSR, I shouldn’t have to send this stuff over. This stuff is irrelevant. Or, I shouldn’t have to pay for it. If I make a super complicated app that has all of these extra wrappers and leaf icons, it shouldn’t negatively impact the performance of the application. Because at the end of the day, it’s just a counter, but it does. Let’s look into it more.

Now we have this more complex application where there’s all these extra wrappers. Now there’s an icon, which is a leaf in a sense. Again, I’m going to come back that like, fundamentally, the only thing that matters is that mutation changes the state, changes the rendering of the component, and everything else is just extra fluff. This is the stuff I want to load in terms of JavaScript, and everything else on the page should just be pure HTML. It should be not part of the discussion here. If the AppRoot or ActionWrapper or action item become complex in terms of the kind of HTML we’re sending over, that should not negatively impact us. Because, at the end of the day, if I was to build this in jQuery, the jQuery just has a listener on the button and update some DOM. All the other HTML is really just irrelevant to the whole thing.

Hydration

Let’s talk about hydration. What do frameworks need to work? Every single framework in the world requires these three things. I don’t care which framework you use, what’s your favorite, or whatever, these three things is what a framework needs. In order for the framework to do something useful, first, it needs to know where the listeners are. It needs to know their locations. It needs to know what code should be executed when you go and interact with it. Step one, you need to know where the listeners are. Now, you execute the listener. In this case, our listener is increment the counter. The listener basically says, count.value++, or something equivalent to that. Except that, what is the initial value of the count? Where does it come from? Frameworks also need to know the current application state. That’s not obvious.

People don’t really think about it in terms of hydration. The listeners can’t do anything useful unless they have a current state of the application. That’s a requirement as well. The framework not only need listeners, they also need to know the application state. The last thing is, when the application state changes, the framework needs to know what to go and rerender. Because if the listener runs and changes some state, and there’s no rerendering, not very useful. The listener needs the application state, and the application state needs to know about where the component bindings are. If you don’t have this information, the framework cannot do anything useful. I don’t care which framework you have, these things are universally true.

How does hydration get all this information? Somewhere in there, there is an equivalent of JavaScript’s main method, where the execution starts. When the execution starts, you’ll usually call some method like hydrate, and you pass in a root component, in this case, the AppRoot. The framework executes the AppRoot, and as a side effect of executing the AppRoot, the framework learns about the counter. Now the framework recurses and says, let me execute the counter. As it’s executing the counter, the framework learns about the state, and about two new components, ActionWrapper and DisplayWrapper.

Now the framework goes and recurses into the ActionWrapper, and it learns about the action. As it learns about the action, it also learns that there is a listener at this particular location. Now the framework knows about the listener. It also knows about action item. It says, I don’t know, maybe there’s more listeners in the action items, or maybe there is state, or who knows what’s in there. I have no choice as a framework to go and execute the action item in order to find out what’s in there.

Then the same thing happens on the right-hand side with the DisplayWrapper, running the display. In that case, the framework learns that there is now a data binding in this particular location. Now the framework knows that the state is data bound to the display, or rather that if the state changes, the display has to be rerendered. Similarly, it has to go back into the DisplayIcon in order to figure this out. If you look at this, you realize, all of this code just eagerly executed on the client. We had this picture of like, really, the only thing I need is the listener, and maybe the display in order to update the UI.

In the process of booting up the framework, the side effect of the booter process, is every single component that was originally as part of the SSR just got executed. There just doesn’t seem to be a way around this.
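
A hypothetical sketch of that eager walk (none of these names are a real framework’s API): the only way for the framework to discover listeners, state, and bindings is to execute every component, top to bottom.

// Assumes each component returns a vnode with optional "listeners" and "children" arrays.
function hydrate(component, props) {
  const vnode = component(props);              // executing the component is the expensive part

  for (const { element, type, handler } of vnode.listeners ?? []) {
    element.addEventListener(type, handler);   // now the framework knows where the listeners are
  }
  for (const child of vnode.children ?? []) {
    hydrate(child.component, child.props);     // recurse into every child, needed or not
  }
}

// hydrate(AppRoot, {});                       // the "main method" where execution starts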

Hydration Alternatives – Progressive Hydration

People know this. People know that hydration is a problem. I don’t think anybody’s arguing that hydration is great. There’s a lot of alternatives that are proposed to solve this problem. I think we’ve gotten to the point as an industry where everybody’s agreeing, hydration is a problem, so let’s see how we can solve this. Let’s look at the alternatives. One alternative that people propose is progressive hydration. Idea is not necessarily to execute less code, but to prioritize the order of execution.

The way progressive hydration works is, imagine that somehow the framework could know ahead of time that there are going to be click listeners, and therefore the framework is setting up a global listener for clicks. Now, as the hydration is running, let’s say the hydration is taking a long time. As the hydration is running, the user goes and clicks on the particular action button right here. Now, when the framework gets to the counter, it learns about the ActionWrapper and DisplayWrapper. Now the framework has a choice, do I descend down the ActionWrapper path first, and then to DisplayWrapper, or do I do it in the other way? Which way should I do it? Typically, frameworks just do it in the order of declarations.

In this particular case, the framework can say, actually, I have seen the fact that there was a click coming from this particular path, there may be a click listener, I don’t know. Why don’t I prioritize and start processing the ActionWrapper path first, just maybe if I find a listener, and if I find the listener, that I can replay that event into the listener to emulate the fact that the click has happened. A user goes and interacts with the button. That interaction tells the framework, go and prioritize this branch of the tree first, before you do the other. In the case of progressive hydration, you don’t execute less code, you still have to go and visit every single component. That doesn’t change.

The order in which you execute the components has now been changed, optimized, improved, so that there is an illusion to the user that the application is more performant, because the click listener got processed first. Actually, to point out an interesting thing here: the framework prioritized processing the action component, the listener, because it knows that the event comes from there, but once the event has come and you have mutated the state, the framework has no idea where that state is used. It needs to go and visit all the other branches and all the other components to see, maybe the state is used over there. Maybe it isn’t, I don’t know. It has no choice but to visit every single thing.

If you look at the progressive hydration report card, you see it’s the same thing. You still have to eagerly download and execute all of this code. Nothing has changed. The performance to the user looks better. In terms of what’s actually happening under the hood, all the parts still execute.
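
A minimal sketch of the global-listener idea behind progressive hydration (the shape of the technique, not any framework’s actual API): capture events before hydration finishes, use them to prioritize, and replay them once the real listeners exist.

const pendingEvents = [];

document.addEventListener("click", (event) => {
  if (!window.__hydrated) {
    pendingEvents.push({ target: event.target });  // remember where the user clicked
    event.preventDefault();                        // the real handler is not attached yet
  }
}, { capture: true });

function replayAfterHydration() {
  window.__hydrated = true;
  for (const { target } of pendingEvents) {
    // re-dispatch so the now-attached listeners can handle the interaction
    target.dispatchEvent(new MouseEvent("click", { bubbles: true }));
  }
}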

Island Hydration

Then you can say, I could do an island. This was popularized by Astro.js, Fresh, and other frameworks. The idea is like, instead of hydrating the whole world, what if somehow I was told that AppRoot is static and irrelevant, and therefore, I’m going to skip that portion and instead start hydrating where it needs to be. The question is, could you make the island smaller? How do you know where to put the island?

One way of looking at it is to say like, the island really has to encompass all of the state. In this particular case, we have no choice but to include the counter as the topmost island, because there needs to be a communication between it. If we make smaller islands, which I’ll show you, it becomes a little more complicated. In this case, you don’t get a lot of savings in this particular example, because there’s just AppRoot, but the AppRoot could be something that’s complicated. Depending on your use case, that actually might be a big saving.

For example, if AppRoot represents a huge blog, and the island represents just the menu system, that’s a huge win. You can get a lot of benefits from this particular approach. In this case, the only thing the island is saving is the AppRoot, everything else still has to be there. Again, I’m pointing out that this AppRoot may be something complicated in the more real-world application, so it might look better.

The alternative would be, what if I make two islands next to each other? You can do that, but now there is a complication, which is that the state is actually outside of the island. If you think about it, an island is really just a small application on your page. Most frameworks have primitives to allow you to work with state within in the application. I’m not familiar much, in terms of frameworks having primitives for talking to other applications on a page. Now you have an inter-island communication problem.

If you want to design your system this particular way, you can certainly do that, but somehow the state has to be passed across. The thing is, while the display was originally together in the same island, I could just use the framework primitives like passing state, updating the state, and everything works. Now that I’m inter-island communication, I cannot use the framework’s primitive to talk to the display, I have to use some other syntax, some other technology to do this. That means I can’t easily move the display from inside of the island to across the island because the communication channel, the API by which I talk to it has to change. That’s one problem. The other problem is, depending on how the state is set up, and different frameworks solve this in different ways, the state outside of the system may or may not be part of the SSR story, more complications.

Certainly, with this particular approach, you can get a lot better. If you look at the report card, you can get 5.5, because the counter has state, and so some magic has to happen over there. You probably don’t need the counter, but you need something. You’re going to need some kind of a thing in that particular location to deal with this particular problem. As you can see, this approach gets you less code at the expense of more complications in terms of like, I need to solve inter-island communication problem, like how do I solve this?

Branch Pruning

The other technology that exists is branch pruning. The idea is, look at this. As a compiler, I’m going to execute the code. As part of the compilation process, I am going to notice that the action item is a leaf. In other words, I’m going to notice that there are no listeners, no state, and no bindings in this particular location. Because all of those things are true and it’s basically static in all respects, I can essentially tree shake it away. Frameworks like Svelte and Solid, I believe, know how to do this. They basically say, if it’s part of SSR or SSG, then the code that I’m going to send to the client doesn’t need to include instructions on how to generate the action item, ActionIcon, or DisplayIcon, because that was already processed.

I know that the information is static, and therefore it will never change. Therefore, we can prune those branches. With this trick, you can certainly get rid of the action item. Again, you’re going to see that you still have a lot of stuff inside of your eager column. The other thing I want to point out is, you can mix and match these. You can have a system in theory that has all of these properties. You can delete more stuff, as you can imagine.

Server Components

The other thing that’s super popular these days is the server components. Server components are interesting in that certain things execute only on a server. In theory, all the static bits can be server only. I say, in theory, because in practice, it’s a little more complicated. This particular diagram is not really true for server components. There is two ways to draw your component diagram. One is logical tree, and the other one is render tree. What this is showing is a render tree. This is how components actually get rendered in terms of how they’re nested inside of a div.

Logically, the rule that server components have is that you can be on the server side, but once you cross over to the client side, you cannot go back to the server side. How did I get DisplayWrapper and DisplayIcon to be server only when I’ve crossed over the boundary? The way you do that is through projection. The most obvious projection is children inside of your component. You can also have extra attributes, like an icon prop that takes JSX, that you can just pass in and pass the data that way. If you’re very diligent, you can do this and remove all of these components in React server components.
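
A hedged sketch of that projection idea in React Server Components terms (component names mirror the talk’s example; only Display would carry the "use client" directive in its own file):

// DisplayWrapper.jsx -- stays a server component, never shipped to the client.
import { Display } from "./Display";          // client component (its file starts with "use client")
import { DisplayIcon } from "./DisplayIcon";  // rendered on the server only

export function DisplayWrapper({ count }) {
  // Projection: the server renders DisplayIcon and passes the result in as a prop,
  // so the client bundle only needs Display itself, not its wrapper or its icon.
  return <Display count={count} icon={<DisplayIcon />} />;
}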

In reality, it is unlikely that you’re going to go to the extreme where you’re going to try to project DisplayIcon, because DisplayIcon is multiple levels removed. You would have to do some real trickery in terms of projection in order to get there. Yes, in theory, it’s possible. In practice, the wrappers, for sure, you will be able to get rid of. I’m not so sure about the icons. I’m going to give them a half point each. You can see that the report card for React server components is less in terms of the amount of code that the hydration has to process.

At this point, I think we should really define the word hydration. I know different people have different definitions. I define it in a very unique way, which is that hydration is the act of running the application code to learn about the application. React server components, the reason why they don’t do hydration is because those components that you see over there, they never make it to the client. The framework does not execute the AppRoot, ActionWrapper, or DisplayWrapper as part of client hydration, because that code never executes. That’s why those components are not hydrated.

They still are reconciled when they do VDOM diffing, but they’re not hydrated. I think the distinction makes sense. I think defining the hydration as attaching DOM listeners, I think is too broad, because jQuery qualifies. Most people will agree that jQuery is not hydration. That’s the server components.

Hydration is in Order

The other thing I want to point out about hydration is that hydration is in order, meaning that you can’t just start hydrating something in the middle of the tree. You always have to start at the root. Because if you start in the middle of the tree, that component probably has props from the parent. How is the framework supposed to get the props from the parent? It gets it by executing it first. That parent component, probably also has props and children from its parent, and so on and so forth. As you can see, you very quickly get like, I need to just execute at the root. Hydration has this property that it is in order.

Progressive hydration prioritized which branches we went down. We can never just start in the middle or skip a few things and come back to it later. We always have to start with the root and go to the children. In the case of island architecture, what you’re doing is you’re creating new artificial roots for the hydration to start at. In that particular case, the developer is responsible for doing this by using some annotation or some extra information to tell the framework like, you need to start here to do your hydration. I think it’s super important to really understand this, that it’s in order.

That you can’t skip around. You can’t just start in the middle. That doesn’t work. You have to start at the root. As I said earlier, you can combine all these strategies to come up with something that is an island architecture with React server components with partial hydration, to get the best of both worlds. I don’t know of any framework that can do all of those things. In theory, it’s possible.

Lazy Loading

Now that we covered the hydration and how hydration works and all those things, let’s talk about lazy loading. Why am I talking about lazy loading? Because when you build a big app, we talk about websites are slow. My argument is that the websites are slow because we send too much JavaScript. Let’s talk about lazy loading. Because if you come to somebody and say, my application is slow, what should I do? They’ll tell you, download less JavaScript, execute less JavaScript, problem solved. Lazy loading.

Let’s say, you want to lazy load the display because your thinking goes like this, “I’m going to render a page. Unless somebody clicks on that button, I don’t need the display. I only need the display if somebody clicks on it. Ideally, I would like to be in a world where like, I don’t download that piece of code.” The thing you would do is you want to say, I want to take the display and move it out. First problem is actually, it’s more complicated than that, you don’t want to just take the display, you also want to take the display and everything that’s below it. Because if you just take the display, then you immediately go back to DisplayIcon, as the DisplayIcon comes with you, then that’s not helpful. It’s not just the display, it’s the display and everything it has underneath it.

The first problem you actually have is just figuring out this graph, like where do I put this boundary? In the case of a counter, simple example, in the real world, it’s like a complicated problem. Where do I put the boundary? The second problem is, what components need to go with it? Again, not a simple thing to answer. The way of the lazy loading is, step one, you have to refactor the code by moving the code into the new file. That’s a lot of work. That’s a non-trivial amount of thing. For those of you who have tried to do lazy loading, I’m sure you’re going to agree with me that refactoring is not that simple, because things depend on things. It can be a lot of work to untangle everything.

Now that you have this, what you’re going to do is you’re going to create a dynamic import. That’s the second bit you have to do. This dynamic import is going to be wrapped in some primitive from the framework. That primitive is going to give you a lazy loaded component. Then you take this lazy loaded component, and you put it in another primitive in a framework that basically says, lazy loading happens here, there’s a special rendering process that is involved. That’s a lot of steps. That’s a lot of stuff you have to do.
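
As one concrete instance of those primitives (React’s lazy() and Suspense, used here only as an example; the file layout and fallback are illustrative):

import { lazy, Suspense } from "react";

// Step 1: the refactored code lives in its own file and is loaded via a dynamic import.
const Display = lazy(() => import("./Display")); // assumes Display is that file's default export

export function DisplayWrapper({ count }) {
  // Step 2: a framework primitive marks where the lazy-loading boundary is.
  return (
    <Suspense fallback={<span>Loading…</span>}>
      <Display count={count} />
    </Suspense>
  );
}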

It gets worse. We just lazy loaded this thing. Now what happens during hydration? You start with AppRoot. AppRoot learns about the counter. Counter tells it about an ActionWrapper. That path goes as we would expect. The second path is DisplayWrapper. You come to the DisplayWrapper, and the DisplayWrapper says, I’m lazy loading a component.

The framework is like, that’s nice, but is there a listener in there? I don’t know. How do I find out? I download it and execute it. All this work you have just done, just gets undone for you by hydration. I’m not saying lazy loading doesn’t ever work. There are many cases where lazy loading works, for example, on route boundaries, or for components that are currently not in the render tree. I’m saying that if a component is in the render tree, then lazy loading does not work. Because you go through this thing, and a hydration will make sure that that code gets loaded. Now, you made the situation actually worse.

Because now the hydration will actually have to pause, download the code, and then continue. It gets even crazier, many frameworks the way they actually do this, is that they keep hydrating until they get to the DisplayWrapper. Then they see there’s a lazy loading boundary, and they say, let’s give up, wait for that to resolve. When it resolves, it restarts at the AppRoot, and see if it can go further. I’m not saying all frameworks do this, but many do it this way. Lazy loading is only useful for components that are not in the render tree. That is not obvious to people. Most people are just like, lazy load everything. It works. Look, there’s a primitive. Solve your problem. Not really. This is why I say that hydration is a saboteur of lazy loading.

The other place you could do lazy loading is you say, fine, forget the components. How about I do it on the events? What if I do it on the events? Can I take the onClick that I have here, and lazy load that? Same thing, pull it out into a separate file, and now you have a problem. You see, this ActionClickHandler needs a setCount. The setCount internally needs to know the state of the system, which is 123. How does it get that information? When it was inside of the button inside of the component, it closed over that information.

The closures just closed over that information. Now that it’s in a separate module, it can’t close over it. What we have to leave behind is a trampoline function. A trampoline function serves two things. One, it tells the framework, there is a listener here, the listener didn’t disappear. Second, the trampoline function goes and collects all of the things that you need to close over, and then lazy load the code when you go and interact with it. Oftentimes, the trampoline function actually might be bigger than the actual code that you’re trying to lazy load. Because in case of a click handler, there’s not much to lazy load here. You do this, you do this, you’ve got the trampoline function.

The trampoline functions are a problem in the sense that, even if you lazy load the listener, there needs to be something left behind that closes over the state. Without it, it just won’t work. I call them trampoline functions.
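
A hedged sketch of what such a trampoline might look like; the chunk name, the exported handler, and the button and setCount variables it closes over are all hypothetical:

// Trampoline left behind for a lazy-loaded click handler: it marks that a listener
// exists here, closes over the state it needs, and only downloads the real code on click.
button.addEventListener("click", async (event) => {
  const { actionClickHandler } = await import("./action-click-handler.js"); // hypothetical chunk
  actionClickHandler(event, setCount); // hand over the closed-over state setter
});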

You did all this, and now you have a new problem, and that is prefetching. Now that you lazy load everything, suppose you could somehow succeed in lazy loading. I just showed you that you won’t succeed. Let’s suppose you succeeded somehow. Now you have a new problem, which is that you are on a mobile device, everything loads fast, and you click on a button, and now you have to wait for the network to go and fetch the lazy loaded code and bring it back. Then you get the behavior that you want. If you had lots of small, lazy loaded chunks, there might be a waterfall effect that happens with it.

The way we deal with this is through prefetching. Prefetching basically tells the browser, “I know I don’t need this code yet, but I may in the future. Why don’t you go and start downloading this code on a lower priority than everything else. When you download everything, just keep it in the cache, and then I may or may not ask for it. Make sure it’s there.” The problem is that most frameworks, as far as I am aware, don’t have any prefetching API. Let’s say you lazy load this particular listener, the click listener: how do I tell the framework that that particular chunk needs to be prefetched? By what mechanism? What API do I call to tell the framework, make sure this thing is ready so that when the user clicks on it, there’s no delay?

Most frameworks will have prefetching, and the automatic lazy loading of route boundaries. If I go from route A to route B, many frameworks will prefetch the route transition, which is great, but they won’t do anything for the code that I have manually refactored for lazy loading. That is up to me. The problem is, the more lazy loaded chunks I have, the more chunks I have to tell the system to prefetch. The whole just bookkeeping, of keeping track of which chunks are where, and when do you download them, and how do I make sure that I’m not missing a chunk, or how do I even know what the chunk name is? When I lazy load the thing, the bundler will munge this and produce a file name that contains a hash in it. How do I tell this hash to the framework to prefetch? All of these problems are problems that frameworks don’t really solve. You’re on your own.
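
The browser-level mechanism behind prefetching is just a link hint; a minimal sketch (the hashed chunk name is a placeholder that would normally come from the bundler’s manifest):

// Ask the browser to fetch a lazy-loaded chunk ahead of time, at low priority.
const link = document.createElement("link");
link.rel = "modulepreload";                          // or "prefetch" for an even lower-priority hint
link.href = "/build/action-click-handler.abc123.js"; // placeholder hashed chunk name
document.head.appendChild(link);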

The frameworks say, you’re too slow, lazy load. Do you have a solution? It’s on your own, figure it out. Lazy loading only really works when a component is not in the render tree, which means it’s great for routes and it’s great for modals. When I click on a button and a modal pops out, all the modal stuff, lazy load it, great. That works just fine.

Resumability

Let’s step back and say, could we have a completely different solution? Could we just eliminate this hydration problem? Because hydration is the source of all this, could we make it just go away? Let’s go back and ask, what do frameworks need? All frameworks, as I said, need the listener locations, the application state, and the components. There’s no way around this problem. The way we get this information with hydration is we get it by executing the application code. That’s what defines the word hydration.

This is why I define it in this particular way. Because the way you get this information is you execute the application. The execution of the application is what ruins your day, because it means you have to download all the code, and you have to execute all the code. The hydration, the way it works, is that you have a server-side rendering or SSG, so the build time or runtime doesn’t matter. Inside of it, you take your code, you execute it. The act of executing of the code gives you the location of the bindings, the location of the state, and the location of the listeners. You then take this particular thing and you turn it into HTML, and the browser gets the HTML.

The browser is like, great, let me boot up the framework. I need that information. What the browser does is it says, let’s grab the code. Let’s execute the code. As the process of executing the code, I’m going to learn about the bindings, the state, and the listeners, and now the application is working.

Let’s look at resumability. We start with SSR, SSG just like before. Just like before, the framework runs all of this code, nothing’s changed. Just like before, we produce HTML. What if inside of the HTML, we also added the information to the framework about where the bindings are, what is the initial state of the system, and where the listeners are? If we added all this information, and we’re not adding the source code, we’re just adding the metadata about where the stuff is. We’re just adding the metadata to it.

Then on the client, the client wakes up, the framework wakes up, and it just has the data, nothing to do. The data got transferred as part of the HTML. What you’re avoiding is this part right here. It doesn’t look like a big part on my slide here, but this is the core problem of performance. Download and execution of that code is really what’s making everything slow. If you can skip that bit right there, then all of a sudden, you have a huge benefit. Hydration basically says, the first block here is supposed to represent the HTML. You download the HTML, then you download the code, then you execute the code. Then you do reconciliation. You figure out the delta between the VDOM you got and what you actually got inside of the HTML.

Resumability just says, here’s the HTML, and you’re done. There’s nothing you have to do until there’s an interaction; then you have to do stuff. You don’t have to eagerly download the code. You don’t have to eagerly execute the code. You don’t have to eagerly reconcile. Because the framework already knows where the listeners are, it already knows what the state of the system is, and it already knows where the bindings are, we can just skip all of that.
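A rough sketch of how a resumable runtime can get away with that (not Qwik’s actual implementation): one tiny global listener reads the serialized metadata on the first real interaction and imports only the handler it points to. The attribute format and chunk names continue the made-up example above.

```typescript
// One tiny global listener is installed up front; nothing else runs eagerly.
document.addEventListener('click', async (event) => {
  const target = (event.target as Element | null)?.closest('[on\\:click]');
  if (!target) return;

  // The attribute points at a chunk and a symbol within it, e.g.
  // "./chunks/add-to-cart.a1b2c3.js#onAddToCart".
  const [chunk, symbol] = target.getAttribute('on:click')!.split('#');

  // Download and execute only that handler, and only now.
  // (A real implementation would map this through the bundler's manifest.)
  const handlerModule = await import(chunk);
  handlerModule[symbol](event, target);
});
```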

There are several frameworks that do this. Qwik is one of them; that’s the framework I work on. There’s Marko from eBay. Marko has been around for a long time, and recently, in version 6, they added resumability. We’re not the only ones who do this. More importantly, Google has a framework called Wiz. Wiz has been around for about 10 years. Depending on how you define the word resumability, you can argue whether Wiz is resumable or not.

Fundamentally, at a high enough level, the idea is the exact same thing: you don’t want to eagerly execute code. Wiz is actually used in Google Search, Google Photos, and a few other Google products. I don’t know if you’ve noticed, but Google Search is pretty fast. As a matter of fact, have you ever gotten into the situation in Google Search where you type stuff and hit the search button, and the search button wasn’t ready? Have you ever had that experience with Google Search? The answer is no, you never had that experience, because Search does not do hydration.

As a result, it’s immediately ready for you. In a case of an application like this, what information gets serialized as part of the resumability? The answer is, the green arrow and the red boxes. That’s the information that gets serialized. The big difference on the report card is the eager part of the application is nothing, there is no eager execution whatsoever. When you do actually finally interact with the application, the only thing you need is the listener, not even the action component, just the listener inside of the action component. You are really surgical about it. You’re like, I just need the function, I don’t need the whole thing, the whole world. You may or may not need the display. I’m going to give it a half a point.

The reason I’m giving it a half a point is because it depends how display is implemented. If the update requires no structural change to the DOM, only update of an attribute or a value, then in that case the system doesn’t even need the display, because the system knows that this value is directly bound to this attribute and a DOM. If the display actually changes structurally, then you need the display component as well. Notice what you don’t need, you don’t need the parent of the display, which is the DisplayWrapper. You don’t need the child, just the display action, because those things are static, and the system just automatically removes this information. Many frameworks know how to do this already, except for one problem. For example, Svelte and Solid will do exactly this on interaction, except that in order for them to learn about the application, they have to execute the application. Once the application is executed, they can do this.

The initial execution is where you get bitten. How does the framework learn about the application? It learns by executing everything, and that’s what costs you. Resumability is a much better approach, because there’s nothing that has to happen eagerly. Now imagine your application has nothing that has to happen eagerly: wouldn’t it start up faster? Of course it would, because there’s no JavaScript you have to download or execute. Then you’re left with only the code that actually needs to run when you interact with the application.

Performance Optimizations

If you think about performance optimizations, there are a lot of ways to make an application slow. Fundamentally, it falls into three main categories: load less code, do less, and don’t duplicate work. Load less code is lazy loading. Do less is lazy execution. Don’t duplicate work is basically what a lot of frameworks have, which is memoization, useMemo or something like that.
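For the “don’t duplicate work” bucket, the idea is the same whether it’s React’s useMemo or a hand-rolled helper; a generic sketch:

```typescript
// Generic memoization: cache results keyed by the arguments, so repeated calls
// with the same inputs don't redo the work.
function memoize<Args extends unknown[], R>(fn: (...args: Args) => R): (...args: Args) => R {
  const cache = new Map<string, R>();
  return (...args: Args): R => {
    const key = JSON.stringify(args);
    if (!cache.has(key)) cache.set(key, fn(...args));
    return cache.get(key)!;
  };
}

const expensiveTotal = memoize((items: number[]) => items.reduce((a, b) => a + b, 0));
expensiveTotal([1, 2, 3]); // computed
expensiveTotal([1, 2, 3]); // served from the cache
```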

What if you had a framework where everything just works this way? As a developer, you don’t have to think about lazy loading, lazy execution, or memoization. It’s just the way the framework is set up; all of those bases are covered out of the box. You don’t have to think about it. You could still shoot yourself in the foot in other ways, but it wouldn’t be in those three, and those are the main ones people usually have to worry about.

Qwik Insights

Now we get to the next problem, which is prefetching. If Qwik is lazy, and Qwik doesn’t download the code until you click on the button, then you have the same problem again: how do you know what code to prefetch to the client? Because otherwise, on first interaction, you’re going to be slow, because you’re going to have to go over the network to get the code that you need. And second, you may generate lots of waterfall requests, where you load the code, and the code says, now I need more stuff, and then more stuff.

Qwik comes with something called Qwik Insights. What Qwik Insights does is monitor your application as it’s running and collect statistical data that’s completely anonymous. The only thing we collect is a timestamp. Each function, which we call a symbol, has a hash. All we need to know is the hash of a particular function and when exactly it ran. From this information, we can answer two very important questions. Question one is clustering. For example, you can see this correlation matrix that I have over here, and I’m hovering over a specific pixel. This pixel is telling me that if the system needs the first symbol, then there is a 19% chance you’re also going to need the second symbol. With that information, the system can come up with optimized bundles that basically say, “Take all these symbols, put them in this bundle. Take these other symbols, put them in this bundle. Take these symbols, put them here.”
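A rough sketch of the kind of statistic involved, with a made-up event shape (the real Qwik Insights pipeline is more involved): from anonymous session/symbol/timestamp events, estimate the probability that symbol B is needed given that symbol A was needed.

```typescript
// Hypothetical event shape: which symbol was needed, in which session, and when.
interface SymbolEvent { session: string; symbol: string; timestamp: number; }

function coOccurrence(events: SymbolEvent[]): Map<string, Map<string, number>> {
  // Group the symbols seen in each session.
  const bySession = new Map<string, Set<string>>();
  for (const e of events) {
    if (!bySession.has(e.session)) bySession.set(e.session, new Set());
    bySession.get(e.session)!.add(e.symbol);
  }

  // Count how often each pair of symbols shows up in the same session.
  const seen = new Map<string, number>();
  const pair = new Map<string, Map<string, number>>();
  for (const symbols of bySession.values()) {
    for (const a of symbols) {
      seen.set(a, (seen.get(a) ?? 0) + 1);
      if (!pair.has(a)) pair.set(a, new Map());
      for (const b of symbols) {
        if (a === b) continue;
        pair.get(a)!.set(b, (pair.get(a)!.get(b) ?? 0) + 1);
      }
    }
  }

  // Convert counts to conditional probabilities: P(b is needed | a was needed).
  for (const row of pair.values()) {
    for (const [b, count] of row) row.set(b, count / seen.get(b)!);
  }
  return pair;
}
```

An entry of roughly 0.19 in the resulting map corresponds to the “19% chance” read off the correlation matrix.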

The first question it answers for you is, how do I colocate code into bundles to minimize waterfalls? Qwik Insights can do that based on the real behavior of real users. If you have an app with a big call-to-action button and everybody clicks on it, that code is going to be bundled together with all the code it requires to run. If there is another button that almost nobody ever clicks on, then you shouldn’t put that code into the main bundle, because that’s unnecessary code you’re sending across. That’s one. The second question you want to answer is, in which order should the code be downloaded? Because even if a button is not used, or very rarely used, you should still load it eventually, so that if you do go and interact with the page, you get a cache hit.
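And for the ordering question, the same data yields a download order: prefetch the most-used bundles first and the rarely used ones last, but still eventually, so an interaction is a cache hit. Again a sketch with made-up bundle names and numbers, not Qwik’s actual scheduler:

```typescript
// Given per-route usage probabilities for each bundle (hypothetical numbers),
// emit a prefetch order: most likely first, rarely used last but still queued.
const usage: Record<string, number> = {
  'add-to-cart.a1b2c3.js': 0.92,
  'open-menu.d4e5f6.js': 0.4,
  'delete-account.g7h8i9.js': 0.01,
};

const prefetchOrder = Object.entries(usage)
  .sort(([, a], [, b]) => b - a)
  .map(([bundle]) => bundle);

for (const bundle of prefetchOrder) {
  const link = document.createElement('link');
  link.rel = 'prefetch';
  link.href = `/build/${bundle}`;
  document.head.appendChild(link);
}
```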

Qwik Insights produces clustering information, which minimizes the waterfalls, and also produces a list of the bundles that are needed for a particular route. Now, when you create the next build of your application, that information is fed into it, and the new version of the application, all of a sudden, behaves a lot better. Over time, we collect all this statistical data about how the different parts of the system are used. A side effect is that, as a developer, you can also look at your code and see which pieces are used a lot in production and which almost nobody ever uses. Is it safe to change this particular function? I don’t know, let me look. Nobody ever uses it. Sure, you can delete it. Get rid of it. Nobody cares. Or it’s a very hot path that matters.

Conclusion

The argument I’m trying to make here is that if you have a resumable system, typically when you build your application, you start developing and the performance slowly gets worse. At some point, you get fed up and you’re like, I need to do something about it. Then there is a period of a couple of sprints, when you’re just optimizing everything. You’re putting memos everything, you lazy load everything, and the performance improves to the point where you’re like, it’s good enough. I don’t have to do anything more. You start working on features, and the performance starts going down again.

At some point, you’re like, this is too slow. I have to do something, and you have an optimization sprint to fix it. The nice thing about systems that are resumable and have lazy loading built into the framework, is that you don’t have to do those optimization steps. At least not in those three categories that I talked about. You can just develop your code, and you know that out of the box without any effort on the developer’s part, you get the optimal lazy loading, you get lazy execution, you get optimization of bundles using statistical models to figure out in which order it should be downloaded.

Therefore, you don’t have to spend time optimizing your application and you’re going to be able to finish your app faster. That’s my argument is that those three categories of problems that a lot of developers have to solve when they use [inaudible 00:48:02], those three sets of problems basically disappear, if you think about it. In other words, my personal opinion is that lazy loading is a property of the framework, and it should not be given to the developer.

Because, as I have shown you, the developer doesn’t really have the freedom to do what they would like, since hydration is in the way, and they also don’t have the knowledge: how do I know where the best location for lazy loading is? That knowledge is gained through profiling, not by just looking at the code.



Podcast: Platform Engineering – Making Other Teams 10x Better

MMS Founder
MMS Jessica Andersson

Article originally posted on InfoQ. Visit InfoQ


Transcript

Shane Hastie: Good day, folks. This is Shane Hastie for the InfoQ Engineering Culture Podcast. Today, I’m sitting down across many miles with Jessica Andersson. Jessica is in the middle of the Swedish summer. Jessica, welcome. Thanks for taking the time to talk to us today.

Jessica Andersson: Thank you for having me.

Shane Hastie: My normal starting point is who’s Jessica?

Introductions [00:27]

Jessica Andersson: Yes, I wish I had a really good quick answer to that one. So I’m currently in between jobs intentionally, but I identify as a platform engineer and also a leader of platforms. I’ve been doing both and looking forward to getting a little bit hands-on again, come this fall. I’m very passionate about enabling teams, accelerating developers, and trying to empower teams to deliver a lot of value. I think that is something where you can have a big impact. I’m also very involved in our local communities around cloud native, and I’m a CNCF ambassador. It’s a lovely community. I’m really happy to be a part of that and be able to help that community evolve as well.

Shane Hastie: Jessica, one of the things that we were chatting about before we started was platform engineering as empowering and enabling others. You in fact said this is the opportunity to be the 10x engineer. How does that come about?

Platform engineering is about empowering others [01:29]

Jessica Andersson: Yes, I think there was a lot of talk for many years that everyone should strive to become the 10x developer, and there was a lot of pressure: did it mean writing more code? Did it mean shipping faster, or whatever did it mean? And there was someone writing on Twitter a while back that being a platform engineer was the closest they ever felt to becoming a 10x developer, and this resonated so much with me. So I think, as my own guiding star, it’s not so much about becoming 10x myself.

It’s about enabling 10 other developers to help them provide the value they want to do faster and more efficient and removing a lot of their pain points. And I think being on a platform team and working with platform engineering, that’s the chance you have to make an impact on that. As a platform team, we have always had the goal of empowering other teams to focus on things that they do to create value for their company or the organization. And so being able to help them to get there is really cool.

Shane Hastie: So what does a good platform engineering team feel like from a culture perspective?

Jessica Andersson: They are perceived by others as very approachable and helpful, I think. Being helpful is always a bit of a balancing act, because you could end up being a little bit too helpful and spend all your time just answering questions, et cetera. But if you don’t have that approachability, with people coming up to you, it’s very hard to understand the problems of the other developers and the pain points they have. They need to feel like they are not asking stupid questions, or at least that they can come to you with those stupid questions and not get dissed, because they obviously don’t know the answer, and if you’re going to help them, if you’re going to empower them, you need to help them get to the point where they can solve it themselves.

When it works really well, you should almost not notice that the platform team is there because the development teams should be able to do all the things they want to do themselves. I think it’s very important to have this non-blocking self-service mindset. And so I think that a really good platform team is probably perceived as helpful but also not really there because everything’s running so smoothly.

Shane Hastie: How do we create that platform team effectively?

Creating effective platform teams [03:44]

Jessica Andersson: I think you have to start somewhere. Everyone has to start somewhere. And when looking back on my own journey at Kognic, I realized that starting somewhere means that there is already a platform, because there’s always an implicit platform even if you didn’t plan for it. If you’re running software in production, you have certain common needs such as build tooling, CI/CD, runtime, observability, et cetera. Those who did not work on the platform explicitly tried to make those things work out as best they could, and once it worked well enough, they went back to building the product as they were trying to do from the start.

And so as a platform team, you have to figure out what is there, and then you can try to transform it, stabilize it, and make it more streamlined, and from there build it up. And when it comes to culture, what leads to trust in and adoption of your platform, I think, is that you need to show the development teams that you can solve their pain points. You need to show them quite early on that you are there to empower them, not to restrict them or be the police of what they can and cannot do. So make it easy for them to do the things they want to do, while also making sure that you aren’t blocking them, so that they can move forward. And then you take one segment, make it work well, then you take the next one, and you keep going. It’s an endless repeat.

Shane Hastie: So how do you build these strong trusting relationships?

Building strong trusting relationships [05:24]

Jessica Andersson: Actually, my partner Carl Engström used to say that trust is a currency and that you have to treat it as such. In turn, you have to spend trust in order to gain adoption. And if we look at trust as something that we can earn or spend, we can look at what things we can do to earn trust and what things we do that cause us to spend trust, and then we can keep track of that balance and try to make sure that we never reach a negative balance on our account.

And so one of the things that I’ve identified that can help you earn more trust is removing pain points. I know I keep coming back to it, but it’s so easy to forget that in their daily life, developers do things that they think are slightly painful, but not painful enough that they are going to raise a flag or come screaming at you. It’s still slightly painful, slightly annoying, and it’s repetitive. So if you can find those things and remove them, that will create more trust from your developers and build that relationship.

We talked about being approachable and helpful, and I think that if you keep being approachable and helpful, and aren’t mean to people who ask stupid questions, they will keep coming back, and they will go back and tell their team, “Oh, I had this issue and the platform team was super nice and helped me out,” and the next time someone else in that team has a problem, they are not afraid to reach out, because they know their teammate got a nice reply. And it doesn’t mean that you always have to jump very fast and try to solve everything without thinking. It’s just about, “Hmm, could you explain more about your problem? Ah, interesting. I’ll look into that and get back to you,” and then you get back to them whether you have an answer or not. It’s about treating everyone with kindness.

I think also, if you are proactive while being approachable, you will probably hear things. Teams may stop short of explicitly saying, “This is a problem,” but you realize you can do something to improve the situation. And if you proactively improve that situation, that will also help build trust and create better collaboration. And I think this is so easy to forget: understanding the team’s perspective. They don’t stick with the same technology as you do day in and day out. They have other things in their plans that they want to achieve. You’re measured on different success measures. So understand where they come from, understand their perspective, and what they know and what they don’t know, and that will help you have a common language when you talk, and that’s very important in order to create good collaboration and build trust.

And if we’re looking at what you can do in order to spend trust, which I think is also easy to forget that there are actually things that will decrease the trust that the team’s have in you. And I think that probably this one to understand it’s like enforcing processes. In my experience, all teams hate when you try to tell them what to do, they always want to try to figure out themselves. It comes back to autonomy. I think that teams like to feel like they have the chance to decide themselves, but when you enforce a process or you tell them you have to do, like they see you remove that choice from them, and you’re pushing them into a box that they did not know that they wanted to be.

Of course you can counter that by anchoring decisions and all those things, but still, enforcing a process will cause you to spend some amount of trust. Whether it’s much or little depends on how you do it. If you introduce blocking processes, I think you are also spending trust. If the teams can’t do something themselves and have to go to a gatekeeper in order to move forward, that’s going to cost you. It feels like a given, but it’s worth mentioning.

Platform teams in general spend a lot of time on migrating and upgrading. We move from one tool to another, we move to a new version, some standard got deprecated and we have to adopt a new one, and so on. All migrations cost you; how well you do them determines how much you spend. The best migrations are the ones that the teams don’t have to care about at all. Sometimes you can get away with it if they feel like they’re getting something extremely valuable out of the migration, but it’s still a small cost, I would say. And I think the worst spending is probably assumptions. Like, we are really good at our jobs, we know what the teams should want.

So if we make an assumption that they really want this thing, and then we make it happen and they didn’t really want it, then that’s going to cost us a lot. And so we’re back to anchoring change and understanding their perspective and all those things. So I think understanding the team’s perspective and avoiding assumptions go hand in hand.

Shane Hastie: You mentioned that platform teams have different success measures to product development teams. What are the important success metrics or measures that platform teams should be looking to?

Success metrics for platform teams [10:24]

Jessica Andersson: So I think that it’s very easy to look at the DORA metrics when it comes to this. They’re hard to implement, but easy to look at, because you have to figure out what they mean for you. There are four main or core metrics that I usually look at: deployment frequency, lead time for changes, mean time to recovery, and change failure rate. And exactly how you measure those will be unique to your company.

There are some guidelines and standards that you can look at, but you will probably have to make some slight adaptations for your organization. But I think the important thing is not that you measure exactly the same way as everyone else; it’s that you measure exactly the same way all the time for yourself, and that you compare against yourself over time, because that’s how you will figure out if you’re making any difference.
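As a minimal illustration of “measure the same way every time,” here is one way to compute three of the four metrics from a list of deployment records; the record shape and the definitions chosen are assumptions you would adapt to your own organization and tooling.

```typescript
// Hypothetical deployment record: adapt the shape and definitions to your org.
interface Deployment {
  mergedAt: Date;   // when the change was merged
  deployedAt: Date; // when it reached production
  failed: boolean;  // did it cause a failure in production?
}

// Assumes at least one deployment in the period.
function doraSnapshot(deployments: Deployment[], periodDays: number) {
  const deploymentFrequency = deployments.length / periodDays; // per day
  const leadTimeHours =
    deployments.reduce((sum, d) => sum + (d.deployedAt.getTime() - d.mergedAt.getTime()), 0) /
    deployments.length /
    36e5; // milliseconds -> hours
  const changeFailureRate = deployments.filter((d) => d.failed).length / deployments.length;
  return { deploymentFrequency, leadTimeHours, changeFailureRate };
}
```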

I’ve heard of a lot of people measuring things such as developer efficiency and developer happiness, those kinds of things. From my point of view, it’s probably not that easy to do, but we were playing around a lot with the idea of continuously sending out a survey. Kognic uses Peakon for employee satisfaction and those kinds of things, and we had the idea that maybe we could send out something similar every half year or so. We never got there though, because other things were very important.

Shane Hastie: The challenges in software engineering in my experience and what I see and hear are almost always people-related, not technology-related. How do we as leaders in that space engage people well to reduce those challenges as much as possible?

Most of the challenges we face are people challenges, not technology [12:13]

Jessica Andersson: As you say, the people problem is always the hardest one, and what I find very interesting when I talk with friends in other organizations is that we all seem to struggle with quite similar problems, regardless of what product we are building. It often comes down to communication, alignment, and those kinds of things that show up frequently. And I honestly don’t have a good answer, I’m sorry, but I think working on making sure that everyone knows what we’re trying to achieve, what direction we’re supposed to go, and what is most important right now, like alignment, is something that you can start with.

And then work on how you communicate around alignment, so that people have the right information when they need it and are not flooded with too much information that doesn’t change anything for them, so that they feel like … Because you can end up in a state where people feel like a lot of things are changing while they really are not, because of how you communicate, and the other way around, where people feel like something needs to change but no one is doing anything, and that’s maybe because you’re having those meetings in a different setting with a different group of people and you don’t communicate to others that you’re on it but don’t have the answer yet. So from my point of view, those are probably some of the hardest questions, and I think we will keep struggling with them.

Shane Hastie: A lot of our audience are people who are moving into leadership roles for often the first time. You’ve spoken about being an engineering leader and also being the practitioner. What are the big steps? What are the big changes when you move from that individual contributor practitioner into the engineering leader space?

Advice for new leaders [14:03]

Jessica Andersson: For me, I was very lucky because I had the chance to do it very gradually. When I joined Kognic about four years ago, I was the first platform engineer/leader. There was no team. And so I started out very hands-on, but also trying to think about a strategy, building a plan, hiring a team, and then getting started with a very small team while still working hands-on on the platform. And then over time, I took on more responsibilities, and in the end the team kindly told me that they were not expecting me to contribute to the backlog anymore, also known as “Please don’t mess anything up.” And I think that was a very nice way of getting to know the different aspects a little bit at a time. If anyone has the chance to do it that way, I would recommend it, because I really enjoyed that process.

And I think something I struggled with a lot in this phase, when I was no longer hands-on in the team but still very much invested in what was being built technically, was not being able to hold the full picture in my mind. I was used to knowing how every system worked and how all the things were connected, and going from that to not being able to do it, while still being involved in the product and where it was going and what problems we were solving, was very hard for me.

And I think also, because I’m one of those people that tend to have a lot of control over things around me, I know where all the things are in the house, I can pull up a lot of details from my mind, and not being able to have all that context was a bit of a struggle. But that’s probably very personal for me. I think everyone will discover their own struggles. There’s also so many things that you don’t think about, like how doing salary review is probably more of a salary season than a salary review period for instance, and all those things that tend to be a lot more around the practical details of running a team of employees rather than building a platform. That’s probably one of the reasons why I’m moving back to hands-on instead.

Shane Hastie: What advice would you give somebody who’s stepping into that space?

Jessica Andersson: Probably try to figure out what you want to achieve. There was a great talk at QCon London 2024 about how to drive change without authority, and the speaker said that he wanted to get into management because he wanted to tell people what they should do, and then he realized that’s definitely not what he should do; he should rather try to drive the change and influence people, because, again, telling people what to do is usually not successful.

And I feel that if you’re trying to get into leadership, or a platform team, or engineering in general, figure out why you want to do it and what you want to achieve with it, and then double-check whether your perception of what it means matches reality. What I did before taking on this role was reach out to a contact who had done a lot of hands-on technical work herself previously and then moved into an engineering leadership position.

And so I asked her out to lunch and asked if I could ask some questions about that transition, and raise all those concerns: will you feel like you don’t connect to the lingo after a while? Will you lose connection to the technical aspects? Is it fun? Is it a lot of handling conflicts? How do you handle those things? Because suddenly you have to do the things that are not fun as well, because there are a lot of things in leadership that are not super fun. They might even be very hard, such as when you have a big conflict, or something is not working out the way it’s supposed to and no one else is there to handle it; suddenly it’s your problem. When you are an IC, in most cases, someone else is there to take care of those things.

So I had the chance to talk to her for a long time, ask all my questions, try to figure out what it would mean to be a leader, voice some of my concerns, and get some advice around that. And that’s probably really good advice if anyone is looking to move into engineering leadership: try to find someone and have that conversation.

Shane Hastie: Where is platform engineering heading? What are the trends? What’s next?

Trends in platform engineering [18:38]

Jessica Andresson: That’s a very good question. I think it’s just wasted hype, like the highest hype level. I know for sure that people have been doing platform engineering for a very long time, but I think it’s just the recent two, three years maybe that we had the words for it so that everyone could share the same words for what they were doing, because a lot of people have been working with this for a very long time, and I’ve had knowledge sharing with other companies back before we even called it platform engineering. But I think in the future, it will probably have to try to deal with the awkwardness around what does platform engineering mean? What does DevOps mean? What does empowered teams and autonomy and all those things mean? Because right now, I’m a little bit worried that platform and engineer will follow the same road as DevOps did.

That means that you have platform teams, but they are just the new operations team with a fancy title, because that’s a lot of what I feel happened with DevOps, to be honest. I think a lot of companies will be able to run platforms as they’re meant to be: an internal developer platform, empowering teams to reach their goals. But I’m also worried that a lot of companies will do a little bit of that and a lot of operations, because that’s what they’re used to, that’s what has always worked, and you always have these things that you don’t have a better solution for, so why not just throw them on the team that seems to handle all those kinds of things? That used to be the “DevOps team,” in quotes, and now it might be the platform team, I worry.

Shane Hastie: So, hold the space carefully.

Jessica Andersson: Yes, I think so. And you need strong leaders with a plan, and the leader in this case might be an engineering manager, it might be a product manager, it could be anyone, but you need strong leaders who can fight for maintaining the idea of running a platform team with the goal of empowering development teams. Because if you don’t have that, you will end up being the sink where all the things go.

Shane Hastie: Jessica, a lot of good advice and interesting ideas here. If people wanted to continue the conversation, where can they find you?

Jessica Andersson: They can find me on LinkedIn and X and Bluesky. Mostly I’m solidtubez. I think we’ll add a link somewhere because that’s hard to spell. I’m happy if anyone would like to reach out. I love chatting about these topics, and I would really love to hear what everyone else is working on as well, because these are a lot of my ideas. I think there are a lot of synergies with others, but everyone has their own flavor, and you always have to adapt slightly to whatever situation you’re in. So I think everyone has a unique story to tell about this.

Shane Hastie: Thank you so much.

Jessica Andersson: Thank you.



ROSEN, LEADING INVESTOR COUNSEL, Encourages MongoDB, Inc. Investors to Secure …

MMS Founder
MMS RSS

Posted on mongodb google news. Visit mongodb google news


NEW YORK, July 25, 2024 (GLOBE NEWSWIRE) — WHY: Rosen Law Firm, a global investor rights law firm, reminds purchasers of securities of MongoDB, Inc. (NASDAQ: MDB) between August 31, 2023 and May 30, 2024, both dates inclusive (the “Class Period”), of the important September 9, 2024 lead plaintiff deadline.

SO WHAT: If you purchased MongoDB securities during the Class Period you may be entitled to compensation without payment of any out of pocket fees or costs through a contingency fee arrangement.

WHAT TO DO NEXT: To join the MongoDB class action, go to https://rosenlegal.com/submit-form/?case_id=27182 or call Phillip Kim, Esq. toll-free at 866-767-3653 or email case@rosenlegal.com for information on the class action. A class action lawsuit has already been filed. If you wish to serve as lead plaintiff, you must move the Court no later than September 9, 2024. A lead plaintiff is a representative party acting on behalf of other class members in directing the litigation.

WHY ROSEN LAW: We encourage investors to select qualified counsel with a track record of success in leadership roles. Often, firms issuing notices do not have comparable experience, resources or any meaningful peer recognition. Many of these firms do not actually litigate securities class actions, but are merely middlemen that refer clients or partner with law firms that actually litigate the cases. Be wise in selecting counsel. The Rosen Law Firm represents investors throughout the globe, concentrating its practice in securities class actions and shareholder derivative litigation. Rosen Law Firm has achieved the largest ever securities class action settlement against a Chinese Company. Rosen Law Firm was Ranked No. 1 by ISS Securities Class Action Services for number of securities class action settlements in 2017. The firm has been ranked in the top 4 each year since 2013 and has recovered hundreds of millions of dollars for investors. In 2019 alone the firm secured over $438 million for investors. In 2020, founding partner Laurence Rosen was named by law360 as a Titan of Plaintiffs’ Bar. Many of the firm’s attorneys have been recognized by Lawdragon and Super Lawyers.

DETAILS OF THE CASE: According to the lawsuit, throughout the Class Period, defendants created the false impression that they possessed reliable information pertaining to the Company’s projected revenue outlook and anticipated growth while also minimizing risk from seasonality and macroeconomic fluctuations. In truth, MongoDB’s sales force restructure, which prioritized reducing friction in the enrollment process, had resulted in complete loss of upfront commitments; a significant reduction in the information gathered by their sales force as to the trajectory for the new MongoDB Atlas enrollments; and reduced pressure on new enrollments to grow. Defendants misled investors by providing the public with materially flawed statements of confidence and growth projections which did not account for these variables. When the true details entered the market, the lawsuit claims that investors suffered damages.

To join the MongoDB class action, go to https://rosenlegal.com/submit-form/?case_id=27182 or call Phillip Kim, Esq. toll-free at 866-767-3653 or email case@rosenlegal.com for information on the class action.

No Class Has Been Certified. Until a class is certified, you are not represented by counsel unless you retain one. You may select counsel of your choice. You may also remain an absent class member and do nothing at this point. An investor’s ability to share in any potential future recovery is not dependent upon serving as lead plaintiff.

Follow us for updates on LinkedIn: https://www.linkedin.com/company/the-rosen-law-firm, on Twitter: https://twitter.com/rosen_firm or on Facebook: https://www.facebook.com/rosenlawfirm/.

Attorney Advertising. Prior results do not guarantee a similar outcome.

——————————-

Contact Information:

        Laurence Rosen, Esq.

        Phillip Kim, Esq.

        The Rosen Law Firm, P.A.

        275 Madison Avenue, 40th Floor

        New York, NY 10016

        Tel: (212) 686-1060

        Toll Free: (866) 767-3653

        Fax: (212) 202-3827

         case@rosenlegal.com

         www.rosenlegal.com


Article originally posted on mongodb google news. Visit mongodb google news



How Team Health Checks Help Software Teams to Deliver

MMS Founder
MMS Ben Linders

Article originally posted on InfoQ. Visit InfoQ

In healthy software teams, people feel psychologically safe to solve problems and contribute, Brittany Woods said in her talk at QCon London. She presented how they do team health checks and the benefits that it has brought them.

The biggest indicator that a team is healthy is the psychological safety of those within the team, Woods said. If not every member is feeling safe within the team to contribute or voice their opinions, it’s not possible for the team to be healthy, she added.

Woods mentioned that this is particularly important in teams where there are mixed skill sets or varied experience levels. In both cases, there is an unwritten power dynamic, so ensuring your team remains a healthy and collaborative team where all members feel safe to solve problems and contribute is very important.

Team health checks are a great way to check in with your software team and get a pulse on how things are going from their perspective, Woods argued:

In my current role, we work with our agile coaches for these health checks, aiming to do them once a quarter.

Woods mentioned that during the health checks, there is time for the team to share things that they believe to be opportunities for improvement along with ideas they may have to solve any identified issues. Their agile coaches facilitate these sessions, making them fun and engaging for the team.

They typically last two hours and they aim for an in-person session, Woods mentioned. Should they do a remote session, they try to ensure all members of the team are remote for a consistent experience across the team.

A typical format would include the following:

  • Before the session, they use a survey tool that shows historical and aggregated trends on team health measures, like feeling able to experiment, feeling supported, enjoying the work, feeling trusted, etc. This gives them a baseline of how the team is feeling about the work they are doing. They do this for each health check so they can see if scores are improving or declining.
  • At the start of the session, they do an icebreaker to get the team engaged – this can be a game, a fun icebreaker question, or for in-person experiences, they often do the LEGO duck building exercise. This consists of taking 6 bricks and in 60 seconds building a duck. It’s always exciting to see how different people interpret the duck differently.
  • The Introduction – the agile coach or facilitator sets the stage for what the session will look like. As an aside, having someone external to the team facilitate the session is important to ensuring a safe and unbiased environment for the team. This also allows the engineering leader to participate as a member of the team.
  • The format of the sessions can vary depending on the exercises that would be most valuable for where the team stands in their feedback.

Team effectiveness is very influenced by the overall health of a team, Woods said. From the team environment to quality measures and metrics, team health can either hinder your team’s ability to deliver on their commitments – software or otherwise – or help it, she concluded.

InfoQ interviewed Brittany Woods about doing team health checks.

InfoQ: How does the format of your team health checks look?

Brittany Woods: In our most recent session, we did the following:

  • Walk through the survey report, pointing out areas where scores had changed from the previous survey. There was also some unstructured time planned in which each member of the team could review/reflect on the results and put in some sticky notes of their thoughts. We use Miro to collaborate and put in stickies. We then broke out into smaller sub groups and discussed what patterns we found in the data.
  • After discussing the survey, we moved into a lean coffee session. Each member came up with topics that they thought would be good to discuss and we voted on those that the majority felt important to spend time on.
  • Next, we did an exercise to give ideas on what a good first thing to solve would be when thinking about the feedback in the survey and how the team could improve.

These activities could look different depending on what things are valuable for your team at that point in time.

InfoQ: What have you learned about team effectiveness?

Woods: I’ve learned that to help be a driver of effectiveness, it’s important to foster safe and inclusive environments where there is space to learn, share, and grow.

When every member of the team is able to learn, share, and grow, your team is, in my opinion, at peak effectiveness and in a headspace where they can deliver their best quality work in the best environment possible for them. At that stage, the real challenge begins of continuously ensuring that the team is able to stay in that space.

As leaders, we have to constantly be engaged in building a safe and collaborative environment for our teams.



AWS Launches Open-Source Agent for AWS Secrets Manager

MMS Founder
MMS Steef-Jan Wiggers

Article originally posted on InfoQ. Visit InfoQ

Amazon Web Services (AWS) has launched a new open-source agent for AWS Secrets Manager. According to the company, this agent simplifies the process of retrieving secrets from AWS Secrets Manager, enabling secure and streamlined application access.

The Secrets Manager Agent is an open-source tool that allows your applications to retrieve secrets from a local HTTP service instead of reaching out to Secrets Manager over the network. It comes with customizable configuration options, including time to live, cache size, maximum connections, and HTTP port, allowing developers to tailor the agent to their application’s specific requirements. Additionally, the agent provides built-in protection against Server-Side Request Forgery (SSRF) to ensure security when calling the agent within a computing environment.

The Secrets Manager Agent retrieves and stores secrets in memory, allowing applications to access the cached secrets directly instead of calling Secrets Manager. This means that an application can retrieve its secrets from localhost. It’s important to note that the Secrets Manager Agent can only make read requests to Secrets Manager and cannot modify secrets, whereas the AWS SDK allows more operations.
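For a sense of what retrieving a secret from the local agent looks like, here is a hedged sketch of an application-side call in TypeScript (Node 18+). The port, path, query parameter, token file location, and header name follow the defaults described in the project’s documentation; treat them as assumptions and check the repository README for the exact values.

```typescript
import { readFileSync } from 'node:fs';

// Sketch of calling the locally running Secrets Manager Agent over HTTP.
// Endpoint details below are assumed defaults -- verify against the README.
async function getSecret(secretId: string): Promise<string> {
  // The agent's SSRF protection expects a token that it writes to a local file.
  const ssrfToken = readFileSync('/var/run/awssmatoken', 'utf8').trim();

  const response = await fetch(
    `http://localhost:2773/secretsmanager/get?secretId=${encodeURIComponent(secretId)}`,
    { headers: { 'X-Aws-Parameters-Secrets-Token': ssrfToken } },
  );
  if (!response.ok) throw new Error(`Agent returned ${response.status}`);

  // The response mirrors the GetSecretValue shape, including SecretString.
  const { SecretString } = (await response.json()) as { SecretString: string };
  return SecretString;
}
```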

A respondent on a Reddit thread explained the difference between the agent and AWS SDK, which, for instance, allows the creation of secrets:

This one caches secrets so that if the same secret is requested multiple times within the TTL, only a single API call is made, and the cached secret is returned for any subsequent requests.

In addition, on a Hacker News thread, a respondent wrote:

If I looked at what this does and none of the surrounding discussion/documentation, I’d say this is more about simplifying using Secrets Manager properly than for any security purpose.

To use the secret manager “properly,” in most cases, you’ll need to pull in the entire AWS SDK, maybe authenticate it, make your requests to the secret manager, cache values for some sort of lifetime before refreshing, etc.

To use it “less properly,” you can inject the values in environment variables, but then there’s no way to pick up changes, and rotating secrets becomes a _project_.

Or spin this up, and that’s all handled. It’s so simple you can even use it from your shell scripts.

Lastly, there are several open-source secret management tools available in the Cloud, like Infisical, an open-source secret management platform that developers can use to centralize their application configuration and secrets like API keys and database credentials, or Conjur, which provides an open-source interface to securely authenticate, control, and audit non-human access across tools, applications, containers, and cloud environments via robust secrets management. In addition to these, there are proprietary secret management solutions like HashiCorp Vault, Azure Key Vault, Google Secret Manager, and AWS Secrets Manager.
