MongoDB Inc (MDB) Trading 3.52% Higher on Oct 30 – GuruFocus

MMS Founder
MMS RSS

Posted on mongodb google news. Visit mongodb google news

Shares of MongoDB Inc (MDB, Financial) surged 3.52% in mid-day trading on Oct 30. The stock reached an intraday high of $288.32, before settling at $284.90, up from its previous close of $275.21. This places MDB 44.10% below its 52-week high of $509.62 and 33.92% above its 52-week low of $212.74. Trading volume was 543,828 shares, 40.9% of the average daily volume of 1,330,510.

Wall Street Analysts Forecast


Based on the one-year price targets offered by 28 analysts, the average target price for MongoDB Inc (MDB, Financial) is $336.13 with a high estimate of $520.00 and a low estimate of $215.00. The average target implies an upside of 17.98% from the current price of $284.90. More detailed estimate data can be found on the MongoDB Inc (MDB) Forecast page.

Based on the consensus recommendation from 32 brokerage firms, MongoDB Inc’s (MDB, Financial) average brokerage recommendation is currently 1.9, indicating “Outperform” status. The rating scale ranges from 1 to 5, where 1 signifies Strong Buy, and 5 denotes Sell.

Based on GuruFocus estimates, the estimated GF Value for MongoDB Inc (MDB, Financial) in one year is $528.80, suggesting an upside of 85.61% from the current price of $284.90. GF Value is GuruFocus’ estimate of the fair value that the stock should be traded at. It is calculated based on the historical multiples the stock has traded at previously, as well as past business growth and the future estimates of the business’ performance. More detailed data can be found on the MongoDB Inc (MDB) Summary page.

This article, generated by GuruFocus, is designed to provide general insights and is not tailored financial advice. Our commentary is rooted in historical data and analyst projections, utilizing an impartial methodology, and is not intended to serve as specific investment guidance. It does not formulate a recommendation to purchase or divest any stock and does not consider individual investment objectives or financial circumstances. Our objective is to deliver long-term, fundamental data-driven analysis. Be aware that our analysis might not incorporate the most recent, price-sensitive company announcements or qualitative information. GuruFocus holds no position in the stocks mentioned herein.

Article originally posted on mongodb google news. Visit mongodb google news



Presentation: Strategic Thinking for Staff+ Engineers

MMS Founder
MMS Shweta Saraf

Article originally posted on InfoQ. Visit InfoQ

Transcript

Saraf: I lead the platform networking org at Netflix. What I’m going to talk to you about is based off 17-plus years of experience building platforms and products, and then building teams that build platform and products in different areas of cloud infrastructure and networking. I’ve also had an opportunity to work in different scale and sizes of the companies: hyperscalers, pre-IPO startups, post-IPO startups, and then big enterprises. I’m deriving my experience from all of those places that you see on the right. Really, my mission is to create the best environment where people can do their best work. That’s what I thrive by.

Why Strategic Thinking?

I’m going to let you read that Dilbert comic. This one always gets me, like whenever you think of strategy, strategic planning, strategic thinking, this is how your experience comes across. It’s something hazy, something like a hallucination, but it’s supposed to be really useful. It’s supposed to be really important for you, for your organization, for your teams. Then, all of this starts looking really hard, like something you don’t really want to do. Why is this important? Strategic thinking is all about building that mindset where you can optimize for the long-term success of your organization. How do you do that? By adapting to the situation, by innovating and building this muscle continuously.

Let’s look at some of these examples. Kodak, the company that invented the first digital camera, had the strategic mishap of not really believing that digital photography was taking off, betting too heavily on film. As a result, their competitors, Canon and others, caught up, and Kodak went bankrupt. We don’t want another Kodak on our hands. Another one that strikes close to home, Blockbuster. How many of you have rented DVDs from Blockbuster? They put heavy emphasis on the physical model of renting media and completely overlooked the online streaming and DVD-by-mail business, so much so that in 2000 they had an opportunity to acquire Netflix, and they declined.

Then, the rest is history, they went bankrupt. Now hopefully you’re excited about why strategic thinking matters. I want to build this up a bit, because as engineers, it’s easy for us to do critical thinking. We are good at analyzing data. We work by logic. We understand what the data is telling us or where the problems are. Also, when we are trying to solve big, hard problems, we are very creative. We get into the creative thinking flow, where we can think out of the box. We can put two and two together, connect the dots and come up with something creative.

Strategic thinking is a muscle which you need to be intentional about, which you need to build up on your critical thinking and creative thinking. It’s much bigger than you as an individual, and it’s really about the big picture. That’s why I want to talk about this, because I feel like some people are really good at it, and they practice it, but there are a lot of us who do not practice this by chance, and we need to really make it intentional to build this strategic muscle.

Why does it really matter, though? It’s great for the organization, but what does it really mean? If you’re doing this right, it means the decisions that you’re making today are durable and will stand the test of time. Whether it’s a technical decision you’re making for the team, or something you’re making for your organization or company at large, your credibility is built by how well you can exercise judgment based on your experience.

Based on the mistakes that you’re making and the mistakes others are making, how well can you pattern match? Then, this leads, in turn, to building credentials for yourself, where you become a go-to person or SME, for something that you’re driving. In turn, that creates a good reputation for your organization, where your organization is innovating, making the right bets. Then, it’s win-win-win. Who doesn’t like that? At individual level, this is really what it is all about, like, how can you build good judgment and how can you do that in a scalable fashion?

Outline

In order to uncover the mystery around this, I have put together some topics which will dive into the strategic thinking framework. It will talk about, realistically, what does that mean? What are some of the humps that we have to deal with when we talk about strategy? Then, real-world examples. Because it’s ok for me to stand here and tell you all about strategy, but it’s no good if you cannot take it back and apply to your own context, to your own team, to yourself. Lastly, I want to talk a bit about culture. For those of you who play any kind of leadership role, what role can you play in order to foster strategic thinking and strategic thinkers in your organization?

Good and Poor Strategies

Any good strategy talk is incomplete without reference to this book, “Good Strategy Bad Strategy”. It’s a dense read, but it’s a very good book. How many people have read this or managed to read the whole thing? What Rumelt really covers is the kernel of a good strategy. It reminds me of one of my favorite Netflix shows, The Queen’s Gambit, where every single episode, every single scene, has some amount of strategy built into it. What Rumelt is really saying is, the kernel of a good strategy is made up of three parts. This is important, because many times we think that there is a strategy and we know what we are doing, but it is too late by the time we discover that this is not the right thing for our business.

This is not the right thing for our team, and it’s very expensive to turn back. The first part of the kernel of a good strategy is diagnosis. It’s understanding why and what problems we are solving. Who are we solving these problems for? That requires a lot of research. Once you do that, you need to invest time in figuring out your guiding policy. This is all about, what are my principles, what are my tenets? Hopefully, this is something that doesn’t keep changing if you are in a different era and trying to solve a different problem. Then you have to supplement it with very cohesive actions, because a strategy without actions is just something that lives on paper, and it’s no good.

Now that we know what a good, well-balanced strategy looks like, let’s look at what are examples of some poor strategies. Many of you might have experienced this, and I’m going to give you some examples here to internalize this. We saw what a good strategy looks like, but more often than not, we end up dealing with a poor strategy, whether it is something that your organizational leaders have written, or something as a tech lead you are responsible for writing. The first one is where you optimize heavily on the how, and you start building towards it with the what. You really don’t care about the why, or you rush through it. When you do that, the strategy may end up looking too prescriptive. It can be very unmotivating. Then, it can become a to-do list. It’s not really a strategy.

One example that comes to my mind is, in one of the companies I was working for, we were trying to design a return-to-work policy. People started hyper-gravitating on, how should we do it? What is the experience of people after two, three years of COVID, coming back? How do we design an experience where we have flex desk, we have food, we have events in the office? Then, what do we start doing to track attendance and things like that? People failed to understand during that time, why should we do it? Why is it important? Why do people care about working remote, or why do they care about working hybrid?

When you don’t think about that, you end up solving for the wrong thing. Free food and a nice desk will only bring so many people back to the office. We failed to dig into the why, or into the different personas. Some personas worked in a lab, so they didn’t really have a choice; even if you did social events or something, they really didn’t have time to participate because they were shift workers. That was an example of a poor strategy, because it ended up being unmotivating. It ended up being very top-down, and just became a to-do list.

The next one is where people start off strong and think about the why. Great job. You understood the problem statement. Then, you also spend time on solving the problem and thinking about the how. Where you fail is in how you apply it, how you actually execute on it. Another example here, and I think this is something you all can relate to: many companies identify developer productivity as a problem that needs solving. How many of you relate to that? You dig into it. You look at all the metrics, DORA metrics, SPACE, tons of tools out there which give you all that data. Then you start instrumenting your code, you start surveying your engineers, and you do all these developer experience surveys, and you get tons of data.

You determine how you’re going to solve this problem. What I often see missing is, how do you apply it in the context of your company? This is not an area where you can buy something off the shelf and just solve the problem with a magic wand. The what really matters here, because you need to understand what the tools can do. Most importantly, how you apply it to your context. Are you a growing startup? Are you a stable enterprise? Are you dealing with competition? It’s not one size fits all. When you miss the point on the what, the strategy can become too high level. It sounds nice and it reads great, but then nobody can really tell you, how has the needle moved on your CI/CD deployment story in the last two years? That’s an example of a poor strategy.

The third one is where you did a really great job on why, and you also went ahead and started executing on this. This can become too tactical or reactive, and something you probably all experience. An example of this is, one of my teams went ahead and determined that we have a tech debt problem, and they dug into it because they were so close to the problem space. They knew why they had to solve this. They rushed into solving the problems in terms of the low-hanging fruits and fixing bugs here and there, doing a swarm, doing a hack day around tech debt. Yes, they got some wins, but they completely missed out the step on, what are our architectural principles? Why are we doing this? How will this stand the test of time if we have a new business use case?

Fast forward, there was a new business use case. When that new business use case came through, all the efforts that were put into that tech debt effort went to waste. It’s really important, again, to think about what a well-balanced strategy looks like, and how you spend time building one, whether it’s a technical strategy you’re writing as a staff or staff-plus engineer, or you’re contributing to a broader organizational strategic bet along with your product people and your leaders.

Strategic Thinking Framework

How do we do it? This is the cheat sheet, or how I approach it, and how I have done it, with partnering with my tech leads who work with me on a broad problem set. This is putting that in practice. First step is diagnostics and insights. Start with, who are your customers? There’s not one customer, generally. There are different personas. Like in my case, there are data engineers, there are platform providers, there are product engineers, and there are end customers who are actually paying us for the Netflix subscription. Understanding those different personas. Then understanding, what are the hot spots, what are the challenges? This requires a lot of diligence in terms of talking to your customers, having a very tight feedback loop.

Literally, I did 50 interviews with my customers before I wrote down the strategy for my org. I did have my tech lead on all of those interviews, because they were able to grasp the pain points or the issues that the engineers were facing, at the same time what we were trying to solve as an org.

Once you do that, it’s all about coming up with these diagnostics and insights where your customer may say, I want something fast. They may not say, I want a Ferrari. I’m not saying you have to go build a Ferrari, but your customers don’t always know what they want. You as a leader of the organization or as a staff engineer, it’s on you to think about all the data and derive the insights that come out of it. Great. You did that. Now you also go talk to your industry peers. Of course, don’t share your IP. This is the step that people miss. People don’t know where the industry is headed, and they are too much into their own silo, and they lose sight of where we are going. Sometimes it warrants a build versus buy analysis.

Before you make a strategic bet, think about what your company needs. Are you in a build mode, or is there a solution that you can buy off-the-shelf which will save your life? Once you do that, then it’s all about, what are our guiding principles? What are the pillars of strategy? What is the long-term vision? This is, again, unique to your situation, so you need to sit down and really think about it. This is not complicated. There are probably two or three tenets that come out which become the guiding principles of, how are we going to sustain this strategy over 12 to 18 months? Then, what are some of the moats or competitive advantages that are unique to us, to our team, or to our company that we are going to build on?

You have something written down at this point. Now the next step of challenge comes in, where it’s really about execution. Your strategy is as good as how you execute on it. This is the hard part where you might think, the TPM or the engineering leader might do all of this work of creating a roadmap, doing risk and mitigation, we’re going to talk about risk a lot more, or resources and funding. You have a voice in this. You’re closer to the problem. Your inputs can improve the quality of roadmap. Your inputs can improve how we do risk mitigation across the business. Do not think this is somebody else’s job. Even though you are not the one driving it, you can play a very significant role in this, especially if you are trying to operate at a staff or a staff-plus engineer level.

Finally, there can be more than one winning strategy. How do you know if it worked or not? That’s where the metrics and KPIs and goals come in. You need to define upfront, what are some of the leading indicators, what are some of the lagging indicators by which you will go back every six months and measure, are these still the right strategic bets? Then, don’t be afraid to say no or pivot when you see that the data says otherwise. This is how I approach any strategic work I do. Not everything requires so much rigor. Some of this can be done quickly, but for important and vital decisions, this kind of rigor helps you do the right thing in the long term.

Balancing Risk and Innovation

Now it looks like we are equipped with how to think about strategy. It reminds me of those pink jumpsuit guys who are guardians of the rules of the game in Squid Game. We are now ready to talk about making it real. Next, I’m going to delve into how to manage risk and innovation. Because again, as engineers, we love to innovate. That’s what keeps us going. We like hard problems. We like to think of them differently. Again, the part I was talking about, you are in a unique position to really help balance out the risk and how to make innovation more effective. I think Queen Charlotte, in Bridgerton, is a great example of doing risk mitigation every single season and trying to find a diamond in the ton. Risk and innovation. You need to understand, what does your organization value the most? Don’t get me wrong, it’s not one or the other.

Everybody has a culture memo. Everybody has a set of tenets they go by, but this is the part of unsaid rules. This is something that every new hire will learn by the first week of their onboarding on a Friday, but not something that is written out loud and clear. In my experience, there are different kinds of organizations. Ones which care about execution, like results above everything, top line, bottom line. Like how you execute matters, and that’s the only thing that matters, above everything else. There are others who care about data-driven decision making. This is the leading principle that really drives them.

They want to be very data driven. They care about customer sentiment. They keep adapting. I’m not saying they just do what their customers tell them, but they have a great pulse and obsession about how customers think, and that really helps them propel. There are others who really care about storytelling and relationships. What does this really mean? It’s not like they don’t care about other things, but if you do those other things, if you’re good at executing, but if you fail to influence, if you fail to tell a story about what ideas you have, what you’re really trying to do.

If you fail to build trust and relationships, you may not succeed in that environment, because it’s not enough for you to be smart and knowing it all. You also need to know how to convey your ideas and influence people. When you talk about innovation, there are companies who really pride themselves on experimentation, staying ahead of the curve. You can look at this by how many of them have an R&D department, how much funding do they put into that? Then, what’s their role in the open-source community, and how much they contribute towards it. If you have worked in multiple companies, I’m pretty sure you may start forming these connections as to which company cares about what the most.

Once you figure that out, as a staff-plus engineer, here are some of the tools in your toolkit that you can use to start mitigating risk. Again, rapid prototyping. Spending two days on rapid prototyping and letting the results drive the learning and the conclusion is way better than months or weeks of meetings trying to make somebody agree on something. We talked about data-driven decisions. Now you understood what drives innovation in your org, but you should also understand what the risk appetite is. You may want to go ahead with big, hairy ideas, or you may not be afraid to bring up spicy topics, but if your organization doesn’t have that risk appetite, you are doing yourself a disservice.

I’m not saying you should hold back, but be pragmatic as to what your organization’s risk appetite is, and try to see how you can spend your energy in the best way. There are ideathons, hackathons. As staff-plus engineers, you can lead by example, and you can really champion those things. One other thing that I like is engineering excellence. It’s really on you to hold the bar and set an example of the level of engineering excellence your org really strives for.

With that in mind, I’m going to spend a little bit of time on this. I’m pretty sure this is a favorite topic for many of you, known unknowns and unknown unknowns. I want to extend that framework a bit, because, to me, it’s really two axes. There’s knowledge and there is awareness. Let’s start with the case where you know both: you have the knowledge and you have the awareness. Those are really facts. Those are your strengths. Those are the things that you leverage and build upon, in any risk innovation management situation. Then let’s talk about known unknowns. This is where you really do not know how to tackle the unknown, but you know that there are some issues upfront.

These are assumptions or hypotheses that you’re making, but you need data to validate them. You can do a bunch of things like rapid prototyping or lookaheads or pre-mortems, which can help you validate your assumptions, one way or the other. The third one, which we don’t really talk about a lot: many of us suffer from subconscious or unconscious biases. You do have the knowledge, and you inherently believe in something that’s part of your belief system, but you lack the awareness that this is what is driving it. In this situation, especially for staff-plus engineers, it can get lonely up there. It’s important to create a peer group that you trust and get feedback from them. It’s ok for you to be wrong sometimes. Be willing to do that.

Then, finally, unknown unknowns. This is like the Wild Wild West. This is where all the surprises happen. At Netflix, we do a few things like chaos engineering, where we inject chaos into the system, and we also invest a lot in innovation to stay ahead of these things, versus having these surprises catch us.

Putting all of this into an outcome-based visual. Netflix has changed the way we watch TV, and it hasn’t been by accident. It started out as a DVD company back in 1997. That factory is now closed. I had the opportunity to go tour it, and it was exciting to see all the robotics and the number of DVDs it shipped. The point of this slide is, it has been long-term strategic thinking and strategic bets that have allowed Netflix to pivot and stay ahead of the curve. It hasn’t been one thing or the other, but continuous action in that direction that has led to the success.

Things like introducing the subscription model, or even starting to create original Netflix content. Then, expanding globally, and now we are into live streaming, cloud gaming, and ads. We just keep on doing that. These are all the strategic bets. We used a very data-driven method to see how these things pan out.

Real-World Examples

Enough of what I think. Are you ready to dive into the deep end and see what some of your industry peers think? Next, I’m going to cover a few real-world examples. Hopefully, this is where you can take something which you can start directly applying to your role, to your company, to your organization. Over my career, I’ve worked with 100-plus staff-plus engineers, and thousands of engineers in general, who I’ve hired, mentored, partnered with. I went back and talked to some of those people again, approximately 50-plus staff-plus engineers who are actual practitioners of the strategic framework I was just talking about.

How do they apply it? I intentionally chose companies of all variations: big companies, hyperscalers, cloud providers, startups at different stages of funding with different challenges. Then established companies, brands that just went IPO. Then, finally, enterprises that have been thriving for 3-plus decades. Will Larson’s book, “Staff Engineer,” talks about archetypes. When I spoke to all these people, they also fell into different categories: deep domain experts, generalist cross-functional ICs, distinguished engineers having industry-wide impact.

Then, SREs and security leaders who are also advising the C-levels, like CISOs. It was very interesting to see the diversity in the impact and in the experience. Staff-plus engineers come in all flavors, as you probably already know. They basically look like this squad, the squad from 3 Body Problem, each of them having a superpower, which they were really exercising in their day-to-day jobs.

What I did was collected this data and did some pattern matching myself, and picked out some of the interesting tips and tricks and anecdotes of what I learned from these interviews. The first one I want to talk about is a distinguished engineer. They are building planet scale distributed systems. Their work is highly impactful, not only for their organization, but for their industry. The first thing they said to me was, people should not DDoS themselves. It’s very easy for you to get overwhelmed by, I want to solve all these problems, I have all these skill sets, and everything is a now thing.

You really have to pick which decisions are important. Once you pick which problems you are solving, be comfortable making hard decisions. Talking to them, there were a bunch of aha moments for them as they went through the strategic journey themselves. Their first aha moment was, they felt engineers are good at spotting BS, because they are so close to the problem. This is a superpower. Because when you’re thinking strategically, maybe the leaders are high up there, maybe, yes, they were engineers at one point in time, but you are the one who really knows what will work, what will not work. Then, the other aha moment for them was, fine, I’m operating as a tech lead, or I’m doing everything for engineering, or I’m working with product, working with design. It doesn’t end there.

If you really want to be accomplished in doing what you’re good at, at top of your skill set, they said, talk to everyone in the company who makes companies successful, which means, talk to legal, talk to finance, talk to compliance. These are the functions we don’t normally talk to, but they are the ones that can give you business context and help you make better strategic bets. The last one was, teach yourself how to read a P&L. I think this was very astute, because many of us don’t do that, including myself. I had to teach myself how to do this. The moment I did that, game changing, because then I could talk about aligning what I’m talking about to a business problem, and how it will move the needle for the business.

A couple of nuggets of advice. You guys must have heard this famous quote, that there’s no compression algorithm for experience. This person believes that’s not true. You can pay people money and hire for experience, but what you cannot hire for is trust. You have to go through the baking process, the hardening process of building trust, especially if you want to be influential in your role. As I was saying, there can be more than one winning strategy. As engineers, it’s important to remain builders and not become Slack heroes. Sometimes when you get into these strategic roles, it takes away time from you actually building or creating things. It’s important not to lose touch with that. The next example is a principal engineer who’s leading org-wide projects, which influence 1000-plus engineers.

For them, the aha moment was that earlier in their career, they spent a lot of time homing in on the technical solutions. While it seems like this is obvious, it’s still a reminder that it’s important to build relationships, as important as it is to build software. For them, it felt like they were not able to get the same level of impact, or when they approached strategy or projects with the intent of making progress, people thought that they had the wrong intentions. People thought they were trying to step on other people’s toes, or trying to steamroll them, because they hadn’t spent the time building relationships. That’s when they realized they could not work in a silo, could not work in a vacuum. That really changed the way they started impacting a larger audience, a larger team of engineers, versus a small project that they were leading.

The third one is for the SRE folks. This person went through multiple startups, and we all know that SRE teams are generally very tiny and serve a large set of engineering teams. When that happens, you really need to think of not just the technical aspects of strategy or the skill sets, but also the people aspect. How do you start multiplying? How do you strategically use your time and influence, not just what you do for the business? For them, the key thing was that they cannot do it all. They started asking this question: if they have a task ahead of them, is this something only they can do? If the answer was no, they would delegate. They would build people up. They would bring others up.

If the answer was yes, then they would go focus on that. The other thing is, not everybody has an opportunity to do this, but if you do, then I do encourage you to do the IC-manager career pendulum swing. It gives you a lot of skill sets in terms of how to approach problems and build empathy for leadership. I’m not saying just throw away your IC career and go do that, but it is something valuable if you ever get the chance.

This one is a depth engineer. It reminded me of The Mitchells vs. the Machines. They thought of it as expanding from interfacing with the machine or understanding what the inputs and outputs are, to taking a large variety of inputs, like organizational goals, business goals, long-term planning. This is someone who spends a lot of focused time and work solving hard problems. Even they have to think strategically. Their advice was, start understanding what your org really wants. What are the different projects that you can think of? Most importantly, observe the people around you.

Learn from people who are already doing this. Because, again, this is not perfect science. This is not something they teach you in schools. You have to learn on the job. No better way than trying to find some role models. The other piece here also was, think of engineering time as your real currency. This is where you generate most of the value. If you’re a tech lead, if you’re a staff-plus engineer, think how you spend your time. If you’re always spending time in dependency management and doing rough migrations, and answering support questions, then you’re probably not doing it right. How do you start pivoting your time on doing things which really move the needle?

Then, use your team to work through these problems. Writing skills and communication skills are very important, so pitch your ideas to different audiences. You need to learn how to pitch it to a group of engineers versus how to pitch it to executive leadership. This is something that you need to be intentional about. You can also sign up for a technical writing course, or you can use ChatGPT to make your stuff more profound, like we learned.

Influencing Organizational Culture

The last thing I want to talk about is the character Aang, in Avatar: The Last Airbender, when he realizes what his superpower is, how to channel all the chi, and then starts influencing the world around him. To me, this is the other side effect of just your presence as a staff-plus engineer and the actions you take, or how you show up: whether you think of it or not, you’re influencing the culture around you. Make it more intentional and think about, how can you lead by example? How can you multiply others?

Also, partner with your leadership. This is a thing where I invite my senior engineering tech leads to sit at the table where we discuss promotions, where we discuss talent conversations. It’s not a normal thing, but it is something I encourage people to do. Because, to me, it’s not just like having a person who’s a technical talent on my team, their context can help really grow the organization.

Role of Leadership

The last thing: if you haven’t seen this documentary, “The Last Dance”, about Michael Jordan, I highly encourage you to see it. It’s very motivational. Then, in terms of leadership, everybody has a role to play. What I want to give you here is, as a leader, what can you do to empower the staff-plus engineers? It’s your job to give them the right business context and technical context. I was talking about this. Do not just think of them as technical talent. Really invite them, get their inputs in all aspects of your org. I know this is a hard one to do. Not everybody thinks about it. This is how I like to do it.

Then, giving them a seat at the table, at promos, and talking to exec leadership, and protecting their time. Finally, this is another one where, as a leader, it’s your job to pressure test strategies. By this, what I mean is, if your technical leadership is pitching all these strategies to you, it’s on you to figure out if this is the strategy that will deliver on the org goals. How we do this is by having meetings with the technical leads and leaders in the organization, where we work through all the aspects of strategy that I talked about, and we pressure test it. We think about, what will happen if we go the build route or the buy route?

Then, that’s an important feedback loop before someone takes the strategy and starts executing. Finally, help with risk management and help with unblocking funding and staffing. If you do all of this, then obviously your leaders will feel empowered to think strategically.

Recap

Understand the difference between strategic thinking and critical and creative thinking, and how it builds upon them; how it’s a different muscle, and why you need to invest in it. Then, we covered the strategic thinking framework. It’s a pretty simple thing that you can apply to the problems that you’re trying to solve. Then, it’s important as a staff-plus engineer to play a critical role in how you balance innovation and risk. Understand what drives innovation. Apply some examples that we talked about. You are influencing the culture. I want to encourage all of you to grow and foster the strategic thinkers that are around you, or if you are one yourself, then you can apply some of these examples to your context.

See more presentations with transcripts



AWS Lambda Introduces a Visual Studio Code-Based Editor with Advanced Features and AI Integration

MMS Founder
MMS Steef-Jan Wiggers

Article originally posted on InfoQ. Visit InfoQ

AWS Lambda has launched a new code editing experience within its console, featuring an integration based on the Visual Studio Code Open Source (Code-OSS) editor.

The Code-OSS integration delivers a coding environment like a local setup, with the ability to install preferred extensions and customize settings. Developers can now view function packages up to 50 MB directly within the console, addressing a limitation of previous Lambda editors. Although a 3 MB per file size limit remains, this change allows users to better handle functions with extensive dependencies.

Furthermore, the editor offers a split-screen layout, letting users view test events, function code, and outputs simultaneously. With real-time CloudWatch Logs Live Tail integration, developers can track logs instantly as code executes, allowing immediate troubleshooting and faster iteration.

A respondent in a Reddit thread commented:

It can be very helpful for quick debugging/testing; thanks for the improvement!

In addition, AWS has focused on making the new editor more accessible by including screen reader support, high-contrast themes, and keyboard-only navigation, creating a more inclusive experience for all developers.

(Source: AWS Compute blog post)

Julian Wood, a Serverless developer advocate at AWS, tweeted:

Lambda’s console is all new and shiny! Now, using the VS Code OSS editor. So, it feels similar to your IDE. Now, view larger package sizes! Test invokes are much simpler; view your results side-by-side with your code for quick iteration.

And finally, the console now features Amazon Q Developer, an AI-driven coding assistant that provides real-time suggestions, code completions, and troubleshooting insights. This integration enables Lambda developers to build, understand, and debug functions more efficiently, streamlining the development process by reducing context-switching. Amazon Q’s contextual suggestions benefit repetitive or complex tasks, such as configuring permissions or handling event-specific data structures.

In an AWS DevOps and Developer Productivity blog post, Brian Breach writes:

Q Developer can provide you with code recommendations in real-time. As you write code, Q Developer automatically generates suggestions based on your existing code and comments. 

Yet, Alan Blockley, a self-proclaimed AWS Evangelist, commented in a LinkedIn post by Luc van Donkersgoed:

I’m conflicted by this release. While I like that it modernizes the creation of Lambda functions and introduces the potential of AI-driven development using Amazon Q and the like, I’ve never liked coding in the console as it discourages other best practices like IaC and change control.

And Marcin Kazula commented:

It is great for experimentation, quick and dirty fixes, validating deployed code, and more. The fact that it deals with large lambda packages is an extra bonus.

Lastly, the new code editor is available in all AWS regions where Lambda is available.




Article: Jump Into the Demoscene; It Is Where Logic, Creativity, and Artistic Expression Merge

MMS Founder
MMS Espen Sande Larsen

Article originally posted on InfoQ. Visit InfoQ

Key Takeaways

  • The demoscene blends creativity and coding, offering a space where anyone can express themselves through technology.
  • It challenges you to create real-time digital art using code, often within strict constraints.
  • My experience in the scene has shaped my ability to think outside the box and solve complex problems.
  • The demoscene is a community where collaboration and learning are key, welcoming people from all skill levels.
  • It is the perfect playground if you’re passionate about coding and creativity.

The demoscene is a vibrant, creative subculture where you can use mathematics, algorithms, creativity, and a wee bit of chaos to express yourself. In this article, I want to inspire you to grab the opportunity to get your voice and vision on the table and show you that the demoscene is not only for mathematical wizards.

The demoscene, in essence

The demoscene is a subculture of computer art enthusiasts spanning the globe. These creative individuals come together at gatherings where they create and show off their demos. A demo is self-executing software that displays graphics and music and is sort of a cerebral art piece. It began in the software piracy culture in the home computer revolution in the 1980s. It started with people who cracked games and wanted to flaunt their accomplishments by adding a small intro before the games or software loaded. This evolved into a culture that celebrates creativity, mathematics, programming, music, and freedom of artistic expression.

The anatomy of a demo

A demo is a piece of multimedia digital art showcasing your abilities as a creative programmer and musician and the capabilities of your chosen platform. It is a piece built within certain constraints and has to be a piece of executable software at its core.

There are some great examples of great demos. “State of the Art” and “9 Fingers” by the group Spaceballs were two groundbreaking demos on the Amiga platform. The demo “Second Reality” by the Future Crew was a mind-blowing, impressive demo that put the IBM PC platform on the demoscene map. “Mojo” by Bonzai & Pretzel Logic is an utterly mind-blowing contemporary demo on the Commodore 64. “Elevated” by Rgba & TBC is truly impressive because it marked a point where traditional CPU-based demos were left in the dust for the new age of GPUs, and the fact that the entire thing is done in only 4096 bytes is almost unbelievable. Countless others could be mentioned.

  • State of the Art
  • 9 Fingers
  • Second Reality
  • Mojo
  • Elevated

One point I must emphasize is that compared to 3D animations made in 3D applications such as Blender or Maya, the graphics might not seem so impressive. However, you must remember that everything is done with code and runs in real-time – not pre-rendered.

Why demos? The motivation behind digital algorithmic art

The motivations could be many. I can speak from my experience of being on the scene as part of a crew. It is an incredible experience being part of a team and using your brain and creativity for fun but competitively.

It is like a team sport; we only use our minds rather than our bodies. You assemble a group with different skills in a team playing a game. Then, you train the team to work together, play off each other’s strengths, and work toward a common goal. In soccer, you have a keeper, the defense line, the midfielders, and the strikers. Each position has responsibilities, and through training and tactics, you work together to win a game.

This is the same in a demo crew. Some are great at graphics, some are great at music, some have a deep mathematical understanding, some are awesome at programming and optimization, and maybe one has a creative vision. You play off everyone’s strength and come together as a team to create a stunning art piece that will resonate with an audience.

I was part of a couple of crews in my formative years, and the sense of camaraderie, working together, learning together, and being serious about the next production to showcase at the next big demo party was very much the same. The primary motivation is to learn, improve, hone your skills, and go toe to toe against some of the world’s greatest minds and most peculiar characters. The culture is also great, inclusive, and open.

The demo party concept is where all these enthusiasts come together, travel from every corner of the world with their gear, set up their stations, and celebrate this craft over intense, sleepless, fun, and engaging days and nights. It is like a music festival, a sports tournament, and a 24/7 art exhibition and workshop melted together into one.

There is nothing like having your art piece displayed on a concert-sized screen, with your music roaring on a gigantic sound system in front of a cheering crowd of 3000 enthusiasts who are there for the same reason. When your piece is well received, you get to tell yourself, “I did that with my brain!” It is a true rush beyond anything else I have experienced and can compare it to.

Overcoming challenges in demo creation: Vision, math, and constraints

There are three main challenges when making a demo. The first one is the idea or the vision. What do I want to show on screen? What story am I telling? How do I want the audience to feel? That is the creative aspect.

Then, once you have that, you must figure out how to implement it. How on earth can I make a black hole floating through the galaxy? What is the math behind how it works? And how can I approximate the same results when the ideal math is too complicated to compute in real time?

The third is once you have conquered the first two challenges: Oh rats, I have to make this fit into 64k or whatever constraint your discipline requires.

You deal with these challenges in a lot of ways. Sometimes, you get inspired by the work of others: “Wow, that spinning dodecahedron was cool; I wonder how they did that?” Then, you figure it out and start getting ideas for doing it differently. “What if I combined that with some awesome ray marching terrain and glowing infinity stone shading?” Then, you evolve that into a story or a theme.

I often remember some imagery from a dream. Then, I start to play around with ideas to visualize them. And then I build from that. But a lot of times, I am just doodling with some weird algorithms, and suddenly, something cool is on the screen, and the creative juices start to flow.

The second challenge is what I find the most fun. More often than not, I have to go on a journey of discovery to figure out what techniques, mathematics, algorithms, and tools I need to use. Sometimes, I see how others have managed to do similar concepts and borrow from them, or I ask fellow demosceners how they would approach it. It is quite an open community, and people usually love to share. One must remember that much of this is based on mathematical concepts, often borrowed from fields other than graphics. Still, these concepts are openly documented in various books and publications. It is not usually protected intellectual property. But you may stumble upon some algorithms that have been protected by patents and subject to licensing, such as Simplex Noise, invented by Ken Perlin. However, copying techniques from other productions is very common in the scene.

Remember that the culture evolved from software piracy, so “stealing” is common. Like Picasso said: “Good artists borrow, great artists steal”. It is not okay to take someone else’s work and turn it in as your own; you must make it your own or use the technique to tell your story. This is why there are cliches in many demos, like plasma effects, pseudo-3D shapes, interference patterns, etc.

The last challenge is the most challenging to take on. Because at that point, you have reached your goal. Everything looks perfect; you have made your vision; the only problem is that it needs to be reduced to fit within the constraints of the competition you are entering. So here, you must figure out how to shave off instructions, compress data, and commit programming crimes against the best practice coding police. This is when methodologies like design patterns, clean code, and responsible craftsmanship go out the window, and things like the game Quake’s “Fast Inverse Square Root Algorithm” come into play. There are so many aspects of this; everything from unrolling loops, self-modifying code, evil floating point hacks, algorithmic approximations, and such comes into play.

Here is an example. A standard approach to compute the inverse of a square root in C might be something like this:

#include <math.h>

float InverseSqrt(float number) {
    return 1.0F / sqrtf(number);
}

Easy to read, easy to understand. But you are including the math.h library. This will impact the size of your binary. Also, the sqrt function might be too slow for your demo to run efficiently.

The fast method found in the Quake III engine looks like this:

float Q_rsqrt(float number)
{
  long i;
  float x2, y;
  const float threehalfs = 1.5F;

  x2 = number * 0.5F;
  y  = number;
  i  = * ( long * ) &y;                       // evil floating point bit level hacking
  i  = 0x5f3759df - ( i >> 1 );               // wtf?
  y  = * ( float * ) &i;
  y  = y * ( threehalfs - ( x2 * y * y ) );   // 1st iteration
  // y  = y * ( threehalfs - ( x2 * y * y ) );   // 2nd iteration, this can be removed

  return y;
}

This is quite hard to understand at first glance. You take a float, cast it into a long, do some weird bit smashing with a strange constant, then stuff it back into a float as if it is perfectly ok to do so. This relies on undefined behavior twice, and then it does a Newton iteration to approximate a result. Originally, the Newton iteration was done twice, but the second iteration was commented out because one approximation was sufficient.

This algorithm is a testament to the sort of wizardry you learn in the demoscene. You learn to cheat and abuse your platforms because you get so familiar with them that even undefined operations can be manipulated into doing your bidding. You would never program anything like this on a commercial software product or check in code like this on a typical project unless you have no other option to reach your goal. But imagine an enterprise software team doing a code review on this after a pull request.
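If you want to poke at the trick yourself, here is a rough JavaScript port (not from the Quake source; only the magic constant and the Newton step are carried over). It uses a shared ArrayBuffer to reinterpret the float’s bits, standing in for the C pointer casts:

// Reinterpret the same 4 bytes as a 32-bit float or a 32-bit unsigned integer,
// which plays the role of the pointer casts in the C version.
const buf = new ArrayBuffer(4);
const f32 = new Float32Array(buf);
const u32 = new Uint32Array(buf);

function fastInverseSqrt(number) {
  const x2 = number * 0.5;
  f32[0] = number;                       // write the float
  u32[0] = 0x5f3759df - (u32[0] >>> 1);  // read its bits as an integer and apply the magic constant
  let y = f32[0];                        // read the result back as a float
  y = y * (1.5 - x2 * y * y);            // one Newton-Raphson iteration
  return y;
}

console.log(fastInverseSqrt(4)); // ~0.499
console.log(1 / Math.sqrt(4));   // 0.5, for comparison

In a browser you will not see the speedup the original got on 1990s CPUs, but it makes the bit manipulation easy to experiment with.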

Let’s create a demo effect together; let’s get our hands dirty

I will show you one of my favorite effects, known as “fire”. What I do is, through a straightforward algorithm, create something that looks like fire burning violently on screen. It is quite impressive in my view.

There are three prerequisites. I will use JavaScript because it is easy, everyone has a browser, and the tooling is minimal. Next, I am using my own graphics library, which I have built. It is called DrCiRCUiTs Canvas Library; it is free, open-source, and has no license. You can find it on NPM here.

It is not the most elegant library, but it hides a lot of the necessary boilerplate and gives you some nice, simple concepts to get started.

Like most of these effects, I use the canvas API but do not draw boxes. Instead, I use bitmap data because bitmap operations like this are more performant than individual drawing operations.

The steps are simple to this effect:

You must build a palette of colors, from white to yellow to orange to black. This represents the fire transitioning from burning hot to smoke. Let’s go old school and use 256 colors in this palette. You could hard code it – this is what I would do back in the day – but these days, with modern RGB displays, you might just as easily generate it. Also, my library has a nice little object for handling colors.

Next, you need to create a drawing buffer on the screen, loop over the pixels, and update the effect for each iteration.

You will also load a logo as a mask buffer, where you will randomize the pixels of the logo’s edges. This will give a burning silhouette in the middle of the screen.

Step 1: Generate the palette.

function generatePalette() {
    for (let i = 0; i < 256; i++) {
        // NOTE: the exact hue/lightness mapping was garbled in the source of this article;
        // the values below are a plausible reconstruction of the classic
        // white -> yellow -> orange -> black fire gradient described above.
        let h = i / 3;                      // hue: 0 (red) toward yellow
        let s = 1;                          // full saturation
        let l = Math.min(1, (i / 255) * 2); // lightness ramps up so the hottest values go white
        pallette.push(hslToRgb(h, s, l));
    }
}

function hslToRgb(h, s, l) {
    let c = (1 - Math.abs(2 * l - 1)) * s;    // chroma
    let hp = h / 60;                          // hue sector, 0..6
    let x = c * (1 - Math.abs((hp % 2) - 1));
    let r = 0, g = 0, b = 0;
    if (hp >= 0 && hp < 1) { r = c; g = x; }
    else if (hp >= 1 && hp < 2) { r = x; g = c; }
    else if (hp >= 2 && hp < 3) { g = c; b = x; }
    else if (hp >= 3 && hp < 4) { g = x; b = c; }
    else if (hp >= 4 && hp < 5) { r = x; b = c; }
    else if (hp >= 5 && hp <= 6) { r = c; b = x; }

    let m = l - c / 2;
    r += m;
    g += m;
    b += m;
    return dcl.color(Math.round(r * 255), Math.round(g * 255), Math.round(b * 255)); // Convert to RGB
}

Step 2: Boilerplate and setting up the buffers.

function initializeFlameArray() {
    for (let y = 0; y < scr.height; y++) {
        let row = [];
        for (let x = 0; x < scr.width; x++) {
            row.push(y === scr.height - 1 ? dcl.randomi(0, 255) : 0); // Randomize the last row
        }
        flame.push(row);
    }
}

function setup() {
    scr = dcl.setupScreen(640, 480); // Set up a screen with 640x480 pixels
    scr.setBgColor("black"); // Set background color to black
    document.body.style.backgroundColor = "black"; // Set page background color

    generatePalette(); // Generates a color palette for the flame
    initializeFlameArray(); // Set up the flame array
    id = new ImageData(scr.width, scr.height); // Initialize screen buffer
}

Step 3: Setting up the draw loop.

function draw(t) {
    randomizeLastRow(); // Randomize the bottom row for dynamic flame effect
    propagateFlame(); // Compute flame propagation using neighboring pixels
    renderFlame(); // Render flame pixels to the screen
    scr.ctx.putImageData(id, 0, 0); // Draw the image data onto the screen
    requestAnimationFrame(draw); // Continuously redraw the scene
}

 // Randomizes the values in the last row to simulate the flame source
  function randomizeLastRow() {
    for (let x = 0; x < flame[flame.length - 1].length; x++) {
      flame[flame.length - 1][x] = dcl.randomi(0, 255); // Random values for flame source
    }
  }

Step 4: Calculate the fire up the screen by playing “minesweeper” with the pixels and averaging them.

function propagateFlame() {
    for (let y = 0; y < flame.length - 1; y++) {
        for (let x = 0; x < flame[y].length; x++) {
            let y1 = (y + 1) % flame.length;
            let y2 = (y + 2) % flame.length;
            let x1 = (x - 1 + flame[y].length) % flame[y].length;
            let x2 = x % flame[y].length;
            let x3 = (x + 1 + flame[y].length) % flame[y].length;

            // Sum the surrounding pixels and average them for flame propagation
            let sum = (flame[y1][x1] + flame[y1][x2] + flame[y1][x3] + flame[y2][x2]) / 4.02;
            flame[y][x] = sum; // Adjust flame height
        }
    }
}

Step 5: Look up the mask buffer, and randomize edge pixels, then render the flame effect

function renderFlame() {
    for (let y = 0; y < scr.height; y++) {
        for (let x = 0; x < scr.width; x++) {
            let fy = y % flame.length;
            let fx = x % flame[fy].length;
            let i = Math.floor(flame[fy][fx]);
            let color = pallette[i]; // Fetch the color from the palette

            if (!color) continue; // Skip if no color found

            // Compute the pixel index in the image data buffer (4 values per pixel: r, g, b, a)
            let idx = 4 * (y * scr.width + x);
            id.data[idx] = color.r; // Set red value
            id.data[idx + 1] = color.g; // Set green value
            id.data[idx + 2] = color.b; // Set blue value
            id.data[idx + 3] = 255; // Set alpha value to fully opaque

            // Check for the logo mask and adjust the flame accordingly
            let pr = pd.data[idx];
            let pa = pd.data[idx + 3];
            if (pr < 255 && pa > 0) { // reconstructed check: mask pixel is not pure white and not transparent
                flame[fy][fx] = dcl.randomi(0, 255); // Intensify flame in the logo region
            }
            if (pr === 255 && pa === 255) {
                id.data[idx] = id.data[idx + 1] = id.data[idx + 2] = 0; // Render logo as black
                id.data[idx + 3] = 255; // Full opacity
            }
        }
    }
}

Step 6: Load the image mask and trigger the effect.

let canvas = document.createElement("canvas");
let ctx = canvas.getContext("2d");
let img = new Image();
img.src = "path_to_your_logo_image.png"; // Replace with your logo image path or URL
img.addEventListener("load", function () {
    canvas.width = img.width;
    canvas.height = img.height;
    ctx.drawImage(img, 0, 0);
    pd = ctx.getImageData(0, 0, img.width, img.height); // Extract image data for mask
    setup(); // Initialize the flame simulation
    draw();  // Start the drawing loop
});

Step 7: Iterate and view your creation.

You can view the code for my implementation at this codepen.
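One small note if you want to run the snippets outside the codepen: they assume a handful of module-level variables declared up front (the names are taken from the code above; dcl is the DrCiRCUiTs Canvas Library):

let scr;            // screen object returned by dcl.setupScreen() in setup()
let id;             // ImageData buffer the flame is rendered into
let pd;             // ImageData of the logo mask, filled in the image load handler
let flame = [];     // 2D array of flame intensities (0-255)
let pallette = [];  // 256-entry fire palette (spelling matches the render code)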

Now, this is not coded to be super-optimized. I did it on purpose to keep it readable and easy to understand for newcomers. I leave it as a challenge to those of you who want to take it on as a little exercise in code golf – the least amount of code to achieve the same effect.

I would love to see your versions of this effect. I encourage you to experiment with the code and observe what happens when you change things. For instance, changing which pixels are included in the calculation and using other calculations besides a simple average may drastically change the effect!

How being a demoscener has shaped my career as a programmer

Creating demos has, for me, been a catalyst for acquiring the skill to learn at a rapid pace. It has also taught me the value of knowing your programming platform deeply and intimately. I am not saying that every programmer needs to do that, but it has proven highly valuable to me in my mainstream career to have the ability to learn new things fast and to have the drive to know what is going on under the hood.

It has also taught me that creativity is an underrated skill amongst developers, and it should be treasured more, especially when working on innovation. Software developers are often viewed as people who just have to type in the solution for the feature described for the next iteration. It might seem simple: “We are implementing a solution that will take a file of invoices from our client, then distribute those invoices to their clients”.

Seems simple enough; it should be able to go into the sprint backlog. Then you pull that task from the backlog and think: “I’ll parse the XML, separate the documents, split them into files, and ship them out through email”. But once you get started, you find that the XML file is 12 GB, your XML parser needs 12 GB of memory allocated for the parsing to work, writing 2 million single files to disk takes hours, and you have a 30-minute processing window.

Well, what do you do? You have to think way outside the box to achieve the task. Of course, you could try to go back and say: “This can’t be done within the parameters of the task; we need to renegotiate our client’s expectations”. But the demoscener and creative aspects of me will rarely let such a challenge go.

This is a true story, and I solved it by reducing the task into the two essential parts of the specification and reinventing the specification in between. The 12GB invoice file and the processing window were the essential bits, and I could get creative with the others. My solution achieved the outcome in less than 5 minutes. I would never have been able to think that way if I never had done any creative coding.
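Just to make the shape of that solution concrete, here is a minimal sketch of the streaming idea in Java (the element name, file name, and the send() step are placeholders, not the original system): instead of loading 12 GB into memory and writing millions of files, you pull one invoice at a time off the stream and dispatch it immediately.

import java.io.FileInputStream;
import java.io.StringWriter;
import javax.xml.stream.XMLInputFactory;
import javax.xml.stream.XMLStreamConstants;
import javax.xml.stream.XMLStreamReader;
import javax.xml.transform.Transformer;
import javax.xml.transform.TransformerFactory;
import javax.xml.transform.stax.StAXSource;
import javax.xml.transform.stream.StreamResult;

public class InvoiceStreamer {
    public static void main(String[] args) throws Exception {
        XMLInputFactory inputFactory = XMLInputFactory.newInstance();
        Transformer copier = TransformerFactory.newInstance().newTransformer();

        try (FileInputStream in = new FileInputStream("invoices.xml")) { // placeholder file name
            XMLStreamReader reader = inputFactory.createXMLStreamReader(in);
            while (reader.hasNext()) {
                // When the cursor sits on an <invoice> start tag, copy just that subtree out.
                if (reader.getEventType() == XMLStreamConstants.START_ELEMENT
                        && "invoice".equals(reader.getLocalName())) {
                    StringWriter oneInvoice = new StringWriter();
                    copier.transform(new StAXSource(reader), new StreamResult(oneInvoice));
                    send(oneInvoice.toString()); // dispatch immediately instead of writing a file
                } else {
                    reader.next(); // keep streaming; only one invoice is ever held in memory
                }
            }
        }
    }

    private static void send(String invoiceXml) {
        // placeholder for "ship it out through email" from the story
        System.out.println("dispatching invoice of " + invoiceXml.length() + " chars");
    }
}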

We sometimes forget that software development is all about creating something that doesn’t exist, and more often than not, we are asked to deliver something we do not know how to build or solve within a scope that we have to define at the highest point of ignorance – creativity will help you do that.

The future of the demoscene: Inclusivity, growth, and opportunities

My hope for the future of the demoscene is that it grows even more glorious and with even more great programmers. I yearn for a future when the diversity of the demoscene is like a mirror image of humanity.

I dream of each innovation that can catapult this art form into a new realm of wonder and creativity, and I want to get as many people involved as possible.

I hope a lot of programmers, young and old, take the leap into this wonderful scene and culture and quickly see that it is not this esoteric mystical world reserved only for a handful of brilliant people. It is truly a place for everyone, where you can challenge yourself to create and show who you are on the screen through algorithmic expression.

Lessons from the demoscene: Creativity, perseverance, and technical mastery

When I ask myself the question: So what have I learned? Today? Through my life? That is a big question. In the context of what I have learned from the demoscene, it is this: I, too, can make demos; what seemed impossible and beyond my intellectual capacity at first evolved into second nature through perseverance and not letting my inner impostor syndrome get the better of me. It helped me gain the confidence and drive to make technology my career; so far, it has turned out to be a good choice.


Podcast: Generally AI – Season 2 – Episode 5: Do Robots Dream of Electric Pianos?

MMS Founder
MMS Anthony Alford Roland Meertens

Article originally posted on InfoQ. Visit InfoQ

Transcript

Roland Meertens: Okay, so for the fun fact today: I know that the city of Colombo, in Sri Lanka, created a model of their city using the video game Cities: Skylines. And this is meant as a digital twin. Their citizens can basically use it to understand the impact of decisions such as changing roads or adding more green space. So they created all kinds of digital assets to make the simulation look more realistic. 

They changed some parameters of the game. And the funniest change is that they modified the behavior of the traffic to reflect Sri Lankan driving habits, which includes: buses may ignore lane arrows, vehicles may enter blocked junctions, vehicles may do U-turns at junctions, 10% of drivers are reckless, vehicles may park on the sides of streets, and three-wheelers and scooters are introduced.

Anthony Alford: And they drive on the wrong side of the road, no doubt.

Roland Meertens: I mean, 10% of drivers are reckless, I thought was the funniest change they made.

Anthony Alford: That’s optimistic, I imagine.

Roland Meertens: If it’s only 10%.

Anthony Alford: Hate to see those numbers for my hometown.

Roland Meertens: Welcome to Generally AI. This is season two, episode five, and we are going to talk about simulation. It is an InfoQ podcast, and I am Roland Meertens here with Anthony Alford.

Anthony Alford: Hello, Roland.

Roland Meertens: Shall I get started with my simulation topic for today?

Anthony Alford: Please begin.

Sampled Musical Instruments [01:50]

Roland Meertens: All right. So earlier this season, we already talked about sampling music, right?

Anthony Alford: Yes.

Roland Meertens: However, I wanted to take this a step further and talk about how you can simulate playing a real instrument using a virtual instrument.

Anthony Alford: Like Guitar Hero.

Roland Meertens: Oh man. I love Guitar Hero.

Anthony Alford: Or Rock Band or one of those games.

Roland Meertens: Did you ever play that by the way?

Anthony Alford: I tried it once and I was terrible.

Roland Meertens: I was really good at Guitar Hero, and then I started playing more real guitar and then I became terrible at Guitar Hero.

Anthony Alford: Not to interrupt your train of thought, but you’ve probably seen the video of the members of the band Rush playing one of those video games and they’re playing a Rush song.

Roland Meertens: I didn’t see that.

Anthony Alford: They’re so bad that in the video game, the crowd boos them off the stage.

Roland Meertens: Yes, I know that feeling.

Anyway, the bigger problem I have is that not everybody has space in their house for an actual instrument, or alternatively, your neighbors might not like it, like my neighbors. So instead of buying a real piano, you can buy an electric piano or you can buy MIDI keyboards. And the interesting thing is that some of the cheaper models, if you buy a super cheap keyboard, will probably, instead of playing a real note, just play a sine wave and call it a day. But of course, real pianists want the expressiveness of a real piano.

Anthony Alford: Yes.

Roland Meertens: So option one to simulate the sound of a real piano is to record the sound of a real piano and then play that. So you can buy all kinds of kits online, like all kinds of plugins for your digital piano. They record it hitting keys on one specific piano in all kinds of different ways with all kinds of different microphones. So I’m going to let you hear an example of that.

How did you like this?

Anthony Alford: That’s very nice. It’s very convincing.

Roland Meertens: Yes. So this is the virtual piano, but then with recordings from a real piano.

Anthony Alford: Okay.

Roland Meertens: If you want this, it’ll cost you $250 for the pack alone.

Anthony Alford: So is it software? Do you use a MIDI controller, and then does this software run somewhere?

Roland Meertens: Yes, so I think this is a plugin. You can use it as a plugin, so you import it into your audio workstation, and then you play with your MIDI controller. So the MIDI controller will tell the computer what key you’re pressing with what velocity, et cetera, et cetera.

Anthony Alford: Very cool.

Roland Meertens: And I know that professional musicians, they all have their favorite virtual recorded piano they like to use, which they think sounds very good. So that’s interesting. I’ve never bought something like this.

Anthony Alford: Well, I don’t know if this is the right time in the podcast to interject it, but I’m sure you’re familiar with the Mellotron.

Roland Meertens: That’s pure sound waves, right?

Anthony Alford: Well, it’s the same idea. It’s an instrument…probably most famously you can hear it on Strawberry Fields Forever by The Beatles, but they did exactly that. They did audio recordings of real instruments. And so if there’s 50 keys on the Mellotron, it’s a keyboard instrument, there would be 50 cassette tapes.

Roland Meertens: Yes, yes. I now know what you mean. So every key has a certain cassette tape, which then starts going down and playing it, but that’s still sampling. And in this case, I guess it’s also just sampling. So you are trying to simulate.

Anthony Alford: It’s simulated in the sense of…basically just play back a recording.

Roland Meertens: Yes. But then per key, you only have one possible sound.

Anthony Alford: Exactly.

Roland Meertens: Whereas this one already took into account with what velocity you press the keys.

Anthony Alford: Oh, okay. So it has a sound per velocity almost, or could be.

Roland Meertens: Yes. So they press each key probably 25 times with different velocities, and then they record all the responses there. And this is why the software is so expensive. It’s not like someone just recorded 88 keys and called it a day. No, no, they did this 88 times 25 times, plus adding the sustain pedal or not.

Anthony Alford: It’s combinatorial explosion here.

Roland Meertens: Yes, I think it’s good value you get here.

Anthony Alford: Yes, I love it.
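As a rough illustration of what such a sample library does under the hood (a hypothetical sketch, not any particular plugin): store one recording per key and velocity layer, then at playback map the incoming MIDI velocity to the nearest recorded layer.

import java.util.HashMap;
import java.util.Map;

/** A toy sampler: one recording per (MIDI note, velocity layer); the nearest layer wins at playback. */
public class ToySampler {
    // e.g. 25 velocity layers per key, spread across the 0-127 MIDI velocity range
    private static final int LAYERS = 25;
    private final Map<String, String> samples = new HashMap<>(); // key "note:layer" -> sample file

    public void addSample(int midiNote, int layer, String sampleFile) {
        samples.put(midiNote + ":" + layer, sampleFile);
    }

    /** Map the incoming velocity (0-127) to the nearest recorded layer and return that sample. */
    public String sampleFor(int midiNote, int velocity) {
        int layer = Math.min(LAYERS - 1, velocity * LAYERS / 128);
        return samples.get(midiNote + ":" + layer);
    }

    public static void main(String[] args) {
        ToySampler sampler = new ToySampler();
        sampler.addSample(60, 12, "C4_mezzo.wav");        // hypothetical file names
        sampler.addSample(60, 24, "C4_fortissimo.wav");
        System.out.println(sampler.sampleFor(60, 64));    // medium velocity -> C4_mezzo.wav
        System.out.println(sampler.sampleFor(60, 127));   // maximum velocity -> C4_fortissimo.wav
    }
}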

Roland Meertens: Well, so the other thing I was looking at is instead of sampling piano sounds, I went to the Roland store this weekend where …

Anthony Alford: Did you get a discount?

Roland Meertens: No, I had to pay like $45 for this t-shirt. No. So one thing which they do is they have a virtual drum kit. So instead of clicking on the pad and then hearing a drum, they actually have a real looking drum kit, but then when you smash it, you only hear like “puff”. So you only hear the sound over your headphones.

But I tried playing it and it still feels like a real drum kit. So the cymbals are made out of rubber. So it kind of looks weird, because it doesn’t feel exactly like a cymbal, but it sounds in your ears the same as a cymbal. And one big benefit here is that you can easily select different sounding drum kits.

Anthony Alford: Okay.

Roland Meertens: So if you want more like a metal drum kit, you select it. If you want a smooth jazzy drum kit, you can select it.

Anthony Alford: And this certainly sounds like a good solution if you have neighbors who are noise sensitive. So all they hear is pop, pop, pop, pop.

Roland Meertens: I also think that if you want to sell your house and you have a neighbor who likes to play the drums, buying this for them will probably increase your property value.

Anthony Alford: It’s worth it for that alone.

Roland Meertens: Yes. So it will set you back around $5,000, by the way.

Anthony Alford: Wow.

Roland Meertens: So I have printed it out. If you ever want to start a new hobby and not annoy your neighbors, this is the way to go.

Anthony Alford: Nice. I want to annoy my neighbors with my hobby, so I’m going to get the real thing.

Roland Meertens: In that case, I think their flagship drum kit at the Roland store was about 10k. Yes. But then you get real drums. Anyways, I will let you listen to how the drums sound.

Anthony Alford: Yes.

Not bad.

Roland Meertens: Sounds realistic, right?

Anthony Alford: Yes.

Roland Meertens: I think this is a perfect replacement for real drums, or at least it felt to me like. I would be quite happy.

Anthony Alford: Play along to that.

Simulated Musical Instruments [08:36]

Roland Meertens: No, absolutely. However, this is the simulation and not the sampling episode. So I wanted to know if people ever simulate sounds, and one of the articles I found was by Yamaha.

Anthony Alford: Of course.

Roland Meertens: Well, you say of course, but so Yamaha I think sells digital keyboards, but they also sell real pianos.

Anthony Alford: Yes. And motorcycles.

Roland Meertens: I did actually own a Yamaha scooter as a kid.

Anthony Alford: I mean, whatever. They’ll make it.

Roland Meertens: Yes. I always wonder how they went from instruments to scooters and motorcycles.

Anthony Alford: Or the other way around.

Roland Meertens: Yes. So they do have a webpage. I will link all these things in the show notes, by the way, but they do have a webpage where they talk about this, and the reason they say that they simulate sounds is different than you might think. They do this not because they want to create a virtual keyboard, but they want to know what a piano sounds like before they build it, and they want to know what design decisions impact their pianos.

Anthony Alford: Oh, interesting. That’s very clever.

Roland Meertens: Yes. So their flagship grand piano costs around $50,000. We are already going up in price here now.

Anthony Alford: How much does the motorcycle cost?

Roland Meertens: Yes, indeed. Yes. You get to choose one hobby in life. But yes, so I can imagine that if you want to build such a high quality piano, you want to know beforehand what design decisions impact the sound in which way. So yes, that’s something which I found fascinating is that they say they are using simulation to improve their real physical products.

Anthony Alford: I had never thought of that. That’s very clever.

Roland Meertens: Yes. The other thing, by the way, is that when I was looking at simulated pianos, one other reason you might want to simulate a piano is: imagine you find a super old relic piano from the past, but it’s important for historical context, right? Maybe this is the piano Mozart learned to play, or the piano George Washington imported. I don’t know.

So you don’t want to play it a lot because it might damage the piano. Or maybe it’s partially broken, maybe some keys are missing, some strings are missing and you don’t want to repair it or it’s too difficult to repair it. You can start simulating that piano and maybe simulate over a couple of keys, maybe play it once, and then record the sounds and then simulate the rest of the piano. So there’s definitely reasons that you want to simulate sound beyond “I’m a musician and I want to have a good sounding piano”.

Anthony Alford: Yes, that makes a lot of sense. My mind is just churning over the Yamaha thing. First of all, that’s one of those situations where you can make your model just as detailed as you want, and you could probably still keep going even after you think it’s done.

Roland Meertens: Yes. In that sense, I tried finding academic papers on this topic, but I didn’t find a lot here. I expected the field of simulated music to be quite large, especially because it feels relatively straightforward that you want to simulate the vibration of a string, you want to simulate how the felt on the hammer impacts the string, and you want to simulate how the noise spreads through different surfaces.

So it seems like a relatively straightforward way for me as an engineer, but there were not a lot of papers of people comparing their simulation, or at least I didn’t find it.

I found one academic paper called something like Modeling a Piano, and this person had a lot of pages with matrices on what effect everything had on everything. But the paper concluded with, oh, we didn’t implement this, but you should probably use a programming language like Rust or Python or something.

Anthony Alford: I was actually speculating they probably consider it a trade secret, and it’s probably some ancient C++ code that only one old guy about to retire knows how it works.

Roland Meertens: Yes. I do know that, just continuing to talk about Roland, if you buy their amplifiers, you have a lot of artificial pedals in there. So it’s basically a software defined amplifier. And there you even have people who used to work on this technology and who retired, but who are still creating software packages for these amplifiers to make them sound like different types of amplifiers. So you can make your guitar and your amplifier sound better because there are people who are so passionate about this that they keep improving the software.

Anthony Alford: These people make the world go around sometimes.

Roland Meertens: Yes. Shout out for those people, whose names I forgot.

Also, the fun thing about this Modeling a Piano paper is that I found out that it’s basically a third year master’s student at a school of accounting who created a whole mathematical model of this piano playing, but then didn’t implement it.

Anthony Alford: Talk about somebody who just wanted a hobby. And your neighbors won’t mind if that’s your hobby.

Roland Meertens: I think with mathematics, it’s always about giving it a go. Anyway, as a consumer, you can buy simulation software for pianos, and that is that Arturia makes a piano simulator that’s called Piano V3. This one I actually own, and this one can simulate any kind of piano, has a lot of settings like what kind of backplay do you want, how old should the strings be?

And this package also costs $250. So you can either shell out $250 for recordings of a piano or for something which simulates a piano and different types of microphones and positions. Do you want to listen to a small piece?

Anthony Alford: Let’s roll that tape.

That’s very pretty.

Roland Meertens: It is quite expressive, I think. Yes. I do still have the feeling that it feels a bit more sine-wavy, and I have the feeling my MacBook speakers don’t handle it well because it seems that for a lot of the output of this, especially this program, it just seems to vibrate the keys of my MacBook in such a way that is really annoying.

Anthony Alford: You could do it with headphones, I suppose, and it would sound really good.

Roland Meertens: And then my neighbors would also be happier. So yes, that’s the outcome. There is software which can simulate how a piano works.

Anthony Alford: Very cool.

Roland Meertens: Yes.

Anthony Alford: I’ll have to check that out.

Robot Simulation – The Matrix [17:11]

Anthony Alford: Well, Roland, you were talking about Yamaha modeling and simulating the piano before they build it to see how those design decisions affect the sound of the piano.

Roland Meertens: Yes.

Anthony Alford: So imagine you’re doing that, only you’re building a robot or an embodied agent, embodied AI. Nobody does robots anymore. Okay, so when you write code without the help of an LLM…at least when I write code, it doesn’t work 100% correctly the first time.

Roland Meertens: Oh, no.

Anthony Alford: Usually, right? So the downside of having a bug in your robot code could be pretty big. Just like spending $80,000 to create a piano that sounds terrible, you have a bug in your expensive robot: your code might cause the robot to run off a cliff, or even worse, the robot might hurt someone.

Roland Meertens: Yes.

Anthony Alford: And probably you’re not the only programmer. You’re a part of a team. Well, everybody wants to try out their code, but maybe you only have one robot because it’s expensive, or you might have a few, but they’re shared.

Or let’s say you’re not even programming the robot because we don’t program robots anymore. The robots learn. They use reinforcement learning. That means that the robot has to do a task over and over and over again, maybe thousands of times or more. That could take a while.

Roland Meertens: Yes.

Anthony Alford: What do we do?

Roland Meertens: I guess given the theme of this episode-

Anthony Alford: The Matrix.

Roland Meertens: The Matrix.

Anthony Alford: We use The Matrix. Okay. It’s a simulator, right? Yes. No surprise, right? Spoiler. So testing your robot control code in a simulator can be a way to address these problems. So we’re going to talk about robot simulators. There’s quite a few different ones available, just like there’s a lot of different piano simulators, and I won’t get into really specific deep dive detail. Instead, we’re going to do high level and talk about what you’ll find in most simulators.

Well, the core component is some kind of world model. The world is usually sparser and more abstract than the real world. It’s got the robot in it, and you’ll put other objects in there. That will depend on what task the robot’s for. If you’re doing an industrial pick and place robot, that’s great. All you need is the workstation and the items that the robot’s picking up. You don’t need to have obstacles and other robots it has to avoid. But if you’re simulating a mobile robot, you’ll need walls, obstacles. If it’s outside, you’ll need buildings, sand traps, people.

Typically in a simulator, you describe these objects in some sort of standard format in a file. You define their geometry and other physical properties. There’s some common file formats for these. There’s Unified Robotics Description Format, URDF. There’s a Simulation Description Format, SDF. So the idea is very similar. With a physical robot, you model it usually as a collection of links and joints. So we’re thinking about a robot manipulator that’s an arm. It has joints and bones.

Roland Meertens: And these are standards for every simulator, or is it-

Anthony Alford: Not every one, but these are for some common popular simulators. But the thing that they accomplish is going to be common to almost any simulator. You have to physically define the objects in the world. And robots in particular usually consist of links and joints. And if you mentally replace “link” with “bone” or “limb”, you could probably describe the position of your body that way, your configuration. You’ve got your arm raised or stretched out, do the YMCA.

Roland Meertens: Yes, with your links and joints.

Anthony Alford: So these attributes define the shape of the robot, how it can move. And you can specify other properties like the mass or moment of inertia of the links. And the reason you do that is because the next key ingredient is the physics engine.

So here’s where we’re calculating the motion of the robot and other objects, the forces on these objects, the forces between objects due to collisions, and things like that. So this is basically Newton’s laws. This is the part where we’re trying to protect the robot, trying to save the robot from damaging itself or from hurting other people. So this is pretty important.

Roland Meertens: Yes. The better the simulator is, the more confident you are that your robot is going to behave safely.

Anthony Alford: Yes. So one important piece of simulating robot motion is called kinematics. There’s two directions of kinematics. There’s forward kinematics, and this is where, for example, with the robot arm, you know the angles of the joints. Forward kinematics computes the 3D position and orientation of the end effector—of the “hand”.

And if you recall our previous episode, we were talking about coordinate systems. This is based on transforming coordinate systems. And by the way, the mathematics involved is matrix multiplication. So there’s the matrix tie in again.

Roland Meertens: Yes. So forward kinematics is quite easy, right? Because you know the exact angle of each motor, and then you know exactly where your robot arm is.

Anthony Alford: That’s right. The opposite of that is inverse kinematics. And that’s where you know what 3D position and orientation you want the end-effector to have. You need to figure out the joint angles to get there. And that is harder to compute because you have to invert those matrices.
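To make that concrete, here is a minimal sketch for a two-link planar arm (not from the episode; in 2D the chain of transforms collapses into sines and cosines, whereas 3D simulators multiply a 4x4 homogeneous transform per joint): forward kinematics maps joint angles to the end-effector position, and inverse kinematics goes the other way.

/** Forward and inverse kinematics for a two-link planar arm (link lengths l1, l2). */
public class PlanarArm {
    private final double l1;
    private final double l2;

    public PlanarArm(double l1, double l2) {
        this.l1 = l1;
        this.l2 = l2;
    }

    /** Forward kinematics: joint angles (radians) in, end-effector position {x, y} out. */
    public double[] forward(double theta1, double theta2) {
        double x = l1 * Math.cos(theta1) + l2 * Math.cos(theta1 + theta2);
        double y = l1 * Math.sin(theta1) + l2 * Math.sin(theta1 + theta2);
        return new double[] {x, y};
    }

    /** Inverse kinematics (one of the two elbow solutions): target {x, y} in, joint angles out; NaN if unreachable. */
    public double[] inverse(double x, double y) {
        double cosTheta2 = (x * x + y * y - l1 * l1 - l2 * l2) / (2 * l1 * l2);
        double theta2 = Math.acos(cosTheta2);
        double theta1 = Math.atan2(y, x)
                - Math.atan2(l2 * Math.sin(theta2), l1 + l2 * Math.cos(theta2));
        return new double[] {theta1, theta2};
    }

    public static void main(String[] args) {
        PlanarArm arm = new PlanarArm(1.0, 0.5);
        double[] pos = arm.forward(Math.PI / 4, Math.PI / 6);   // drive the joints
        double[] angles = arm.inverse(pos[0], pos[1]);          // recover the joint angles
        System.out.printf("end effector (%.3f, %.3f), angles (%.3f, %.3f)%n",
                pos[0], pos[1], angles[0], angles[1]);
    }
}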

So now we’ve got the robot’s motion. So we can command the robot, put the end effector at some 3D point, figure out the joints, angles, and drive it to those angles. But there’s also sensors. So the robots usually have sensors. You’ll have things like LIDAR, sonar, proximity sensors, vision cameras.

So to simulate sensor data, especially for those cameras, you need some sort of 3D graphics rendering. And what’s nice is that a lot of the info you need for the physics of the robot in the world can also be used for the graphics.

So you have to include things like colors and textures and maybe some reflectivity and light sources. And surprise, the mathematics for the 3D graphics are very similar: coordinate transformations and matrix multiplication.

Video Games and Other Simulation Frameworks [23:57]

Anthony Alford: So quiz time. What broad class of software usually includes both physics and 3D graphics besides robot simulator?

Roland Meertens: Is it video games?

Anthony Alford: It’s video games, absolutely. All technological progress is driven by video games. And in fact, some of these game creation frameworks or game engines are actually sometimes used for robotic simulation. There’s one called Unity, and there are several InfoQ news articles that cover embodied agent challenges that are built using the Unity game engine.

Roland Meertens: I’m always surprised that if you think about the costs to build an accurate representation of a world, Grand Theft Auto is actually quite good.

Anthony Alford: I wonder how many people are training their robot using Grand Theft Auto.

Roland Meertens: So it used to be quite a lot.

Anthony Alford: Oh, really?

Roland Meertens: Yes. But they explicitly make it hard to use it as an API because they don’t want people cheating in their games, which is a bit lame because it’s just already an accurate simulator of life in America.

Anthony Alford: Hey, come on! Some of it, maybe.

Roland Meertens: I will tell you this, that I first played Grand Theft Auto and I was like, “The real world can’t behave like this”. And I came to America and I was like, “Ah, some of these things are actually true”.

Anthony Alford: I’m so ashamed.

Okay. Since we’re name-dropping some simulation frameworks, I might as well mention a few others. There’s one called Gazebo.

Roland Meertens: Oh yes, I love Gazebo. Yes.

Anthony Alford: I knew you would be familiar with that. It’s an open source project. It’s maintained by Open Robotics, which also maintains ROS, the Robot Operating System. And InfoQ has a great presentation by a software engineer from Open Robotics that is about simulating with Gazebo and ROS.

Roland Meertens: Yes, I actually invited her to speak at QCon AI.

Anthony Alford: I sometimes often seem to be doing the half of the podcast that you’re the actual expert in.

Roland Meertens: Well, I mean Louise Poubel is in this case the expert, right? I just invited her.

Anthony Alford: Yes, you’re correct. Some of the big AI players, in fact, have their own simulation platforms. NVIDIA has one called IsaacSim, and that’s part of their Omniverse ecosystem.

Meta has one called Habitat, and that one has an emphasis on simulating spaces like homes where people and robots might interact.

Roland Meertens: And is Habitat easily accessible?

Anthony Alford: It’s open source, yes. They do challenges where they invite people to build virtual robots that operate in the Habitat. Yes.

Roland Meertens: Nice.

Anthony Alford: I don’t know if they’re still doing this, but they tried to pivot to this Metaverse thing, which is basically a simulator, but for people, maybe the real matrix, the alpha version.

Roland Meertens: Yes. Meta is not super active in the robotics space, are they?

Anthony Alford: Again, they’ve built a simulation environment and they’re inviting people to build on it, so maybe.

Roland Meertens: And they already have the technology to do accurate 3D localization. They have all the ingredients to create super cool robots.

Anthony Alford: Yes. So Google is doing robotics. Google, is there anything Google doesn’t do? I don’t know.

So I mentioned ROS, and now as it happens, many of these simulations have integrations with ROS. Because the idea, remember is we’re trying out the robot control software, and that really means the entire stack, which includes the OS.

So you’ve got your control software running on the OS, and instead of the OS interacting with the real robot, sending commands to actual motors and reading physical sensors, instead it can do all that with the simulated robot.

But I keep assuming that we wrote control software. Of course, nowadays, nobody wants to hand code control algorithms. Instead, we want the robot to interact with the environment and learn its own control software. And that’s again, as I said, another reason to use simulation. Because reinforcement learning requires so many iterations.

And, no surprise, there are reinforcement learning frameworks that can integrate with the simulators. So a lot of times these are called a gym or a lab. For example, NVIDIA has a framework called Isaac Lab that works with their Isaac simulator. Meta has a Habitat Lab for Habitat, and probably a lot of us remember OpenAI’s Gym. You could play Atari games and things like that. That’s now called Gymnasium, and it integrates with Gazebo and ROS.

Roland Meertens: Oh, so the OpenAI Gymnasium now also interacts with all these other open simulators.

Anthony Alford: Well, at least somebody has built an integration. So you can find integrations. They may not be … I don’t know how official some of them are.

So the key idea with these reinforcement learning frameworks, these gyms or labs, is that they provide an abstraction that’s useful for reinforcement learning. So basically they provide you an abstraction of an environment, actions, and rewards. And because it’s operating on a simulator instead of the real world, in theory, you can do many episodes very quickly.

Roland Meertens: Yes, I tried this with OpenAI Gym. It was quite fun to play around with.
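For readers who want to see the shape of that abstraction, here is a minimal, self-contained toy (illustrative only, not the API of Gymnasium or any specific framework): an environment exposes reset and step, step returns an observation, a reward, and a done flag, and the training loop simply runs episodes against it.

import java.util.Random;

/** A toy "drive to the target" environment with the reset/step/reward shape that RL gyms expose. */
public class ToyEnvDemo {
    record Step(double[] observation, double reward, boolean done) {}

    static class ReachEnv {
        private final Random rng = new Random(42);
        private double position, target;
        private int ticks;

        double[] reset() {                       // start a new episode
            position = 0.0;
            target = rng.nextDouble() * 10.0;
            ticks = 0;
            return new double[] {position, target};
        }

        Step step(int action) {                  // action 0 = move left, 1 = move right
            position += (action == 1) ? 0.5 : -0.5;
            ticks++;
            double distance = Math.abs(target - position);
            boolean done = distance < 0.25 || ticks >= 200;
            return new Step(new double[] {position, target}, -distance, done);
        }
    }

    public static void main(String[] args) {
        ReachEnv env = new ReachEnv();
        Random rng = new Random(7);
        for (int episode = 0; episode < 3; episode++) {
            double[] obs = env.reset();
            double totalReward = 0;
            boolean done = false;
            while (!done) {
                int action = rng.nextInt(2);     // stand-in for the learned policy
                Step result = env.step(action);
                totalReward += result.reward();  // a real agent would update its policy here
                obs = result.observation();
                done = result.done();
            }
            System.out.printf("episode %d, return %.1f%n", episode, totalReward);
        }
    }
}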

Anthony Alford: So you’ve simulated pianos to various degrees of success. You may wonder how well robot simulators work in practice. Train a robot in the simulated world, what happens when you let it loose on the real world?

Roland Meertens: What’s the reality gap?

The Reality Gap [29:42]

Anthony Alford: Yes, that’s exactly right. It’s called sim-to-real, the reality gap. The robot learns from an imperfect simulation, and so it behaves imperfectly in the real world. So this is an active research area, and there are different approaches.

One that popped up a couple of times in recent work is called domain randomization. Now this is where you apply randomization to different parameters in your simulation. So for example, maybe you have a random amount of friction between some surfaces, or you randomly add or subtract something from the weight or shape of a robot link. And the reason this works according to the researchers is it acts like regularization. So it keeps the model from overfitting.

Roland Meertens: Yes, by adjusting the domain just a bit.
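A small sketch of that idea (the parameter names and ranges are invented for illustration): before each training episode, resample the physical parameters the simulator uses, so the policy cannot overfit to one exact world.

import java.util.Random;

/** Domain randomization sketch: jitter simulator parameters before every episode. */
public class DomainRandomization {
    record PhysicsParams(double friction, double linkMassKg, double sensorNoiseStd) {}

    private final Random rng = new Random();

    /** Sample each parameter uniformly around its nominal value. */
    PhysicsParams sampleEpisodeParams() {
        return new PhysicsParams(
                uniform(0.4, 1.0),    // surface friction coefficient
                uniform(0.9, 1.1),    // link mass: +/- 10% around 1 kg
                uniform(0.0, 0.02));  // additive sensor noise (standard deviation)
    }

    private double uniform(double lo, double hi) {
        return lo + rng.nextDouble() * (hi - lo);
    }

    public static void main(String[] args) {
        DomainRandomization dr = new DomainRandomization();
        for (int episode = 0; episode < 3; episode++) {
            // In a real setup these values would be pushed into the simulator before env.reset().
            System.out.println("episode " + episode + ": " + dr.sampleEpisodeParams());
        }
    }
}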

Simulation Makes Dreams Come True [30:31]

Anthony Alford: Mm-hmm. So one final thought: earlier I brought up The Matrix.

Roland Meertens: Yes.

Anthony Alford: I was never a huge Matrix fan. I felt like it violated the laws of thermodynamics to say that the people were power sources. What if that’s not actually what happened? Because we could see in the movie that The Matrix is a simulation environment and humans can learn skills like Kung Fu very quickly, just like bam. So what if that’s what The Matrix was actually originally for?

Roland Meertens: The learning environment?

Anthony Alford: Exactly. So believe it or not, as I was preparing for this podcast, I came across an interesting biology preprint paper. The title is The Brain Simulates Actions and Their Consequences during REM Sleep. So basically the authors found that in mice, there’s a motor command center in the mouse brains, and it’s involved in “orienting movements”. It issues motor commands during REM sleep.

So these are things like turn right, turn left. And they found that although the mouse’s real physical head isn’t actually turning, the internal representation of where the mouse’s head is pointing does move. They look at certain neurons that are responsible for-

Roland Meertens: Interesting.

Anthony Alford: Yes. I don’t know how they—they’re basically reading the mouse’s mind somehow. Anyway, they suggest that “during REM sleep, the brain simulates actions by issuing motor commands that while not executed, have consequences as if they had been. The study suggests that the sleeping brain, while disengaged from the external world, uses its internal model of the world to simulate interactions with it”.

So it seems like dreams are like an organic version of The Matrix. Basically we do know that your memories and what you’ve learned do get affected by sleep, especially REM sleep.

Roland Meertens: Your dreams will come true.

Anthony Alford: They do. And so that’s a better tagline than I had. So we’re going to end it on that.

Roland Meertens: Yes, I like it. Make sleep even more important than you realize.

Anthony Alford: So there we go. Let’s sum it up: Simulations make your dreams come true.

Roland Meertens: Probably.

Anthony Alford: Bam. Headline.

Roland Meertens: Nice.

Grand Theft Auto: Colombo [32:57]

Roland Meertens: All right. Last but not least, some words of wisdom. Do you have some fun facts? Anything you recently learned which you want to share?

Anthony Alford: I don’t have something new, but something that occurred to me while you were talking about simulating musical instruments. Supposedly the theremin, which I think we’ve probably talked about before, the musical instrument you don’t touch, I believe the inventor of that was trying to at least mimic the sound of a cello.

Roland Meertens: Interesting. It’s also fascinating that with the synthesizer, people started trying to mimic existing sounds, but then it became a whole thing on its own to use those artificial sounds as its own separate …

Anthony Alford: Brand new thing: square wave, sawtooth.

Roland Meertens: Square wave. Yes. Yes. It’s its own art form now.

Anthony Alford: Definitely. And in a way, a quite dominant one.

Roland Meertens: Yes, no, definitely. And then also, if you think about different eras, if you want to have an ’80s sound, you are thinking of synthesizers, which are playing …

Anthony Alford: Yamaha FM.

Roland Meertens: The Roland 808s.

Anthony Alford: Oh yes.

Roland Meertens: Yes. No indeed. So it is interesting that the inaccuracies also become their own art form.

Anthony Alford: Very true.

Roland Meertens: Where I must say that I often, as a software engineer, strive for perfection, but maybe sometimes you can stop sooner and then just call it a new thing and then you’re also done.

Anthony Alford: It’s not a bug. It’s a feature.

Roland Meertens: On my side, I found this very interesting video yesterday. We talked about juggling robots in the first season, and I found someone called Harrison Low. He is building a robot which can juggle, and it’s quite interesting to see what he came up with because he has a super complicated system with multiple axes, which can move up and down and throw balls and catch balls again. It can also hold two balls in the air.

It doesn’t sense yet, but it’s very interesting to see his progress. So I will put that in show notes, so you can take a look at that.

Anthony Alford: I wonder if he uses a simulator.

Roland Meertens: I don’t think so, but I’m not sure.

Anthony Alford: Now that’s hardcore.

Roland Meertens: Yes. I guess he has some kind of 3D model, so at least knows how different things impact his 3D model.

Anthony Alford: One would assume.

Roland Meertens: Yes. It’s just annoying with the simulations that the devil always seems to be in the details, that tiny, tiny things tend to be really important.

Anthony Alford: Maybe he could run it in GTA.

Roland Meertens: Yes.

One fun thing, by the way, talking about the reality gap is that I always notice in my machine learning career that the things you think matter are not the things which actually matter.

Anthony Alford: Ah, yes.

Roland Meertens: At some point, for example, with self-driving cars, people ask me, “Oh, what’s important to simulate?” And I was like, “Oh, I don’t know”.

Anthony Alford: You’ll find out.

Roland Meertens: Yes, like, “Oh, I don’t know, like windows are maybe transparent or something”. We noticed at some point that a neural network we trained to recognize cars really knew that something was a car because of the reflective number plates.

But if you ask me what’s the most defining feature of a car, I would be like, “I don’t know, like it has a hood, has four wheels”. Nobody ever says reflective number plates.

Anthony Alford: Interesting.

Roland Meertens: Yes. Shows again that you never know what you really want to simulate accurately.

I guess that’s it for our episode. Thank you very much for listening to Generally AI, an InfoQ podcast. Like and subscribe to it, share it with your family, share it with your friends.

Anything to add to that, Anthony?

Anthony Alford: No, this was a fun one. I really enjoyed this one.

Roland Meertens: All right.

Anthony Alford: I enjoy all of them, but this one was especially enjoyable.

Roland Meertens: Cool. And my apologies to America for comparing you to Grand Theft Auto. And my apologies to Sri Lanka for saying that 10% of drivers are reckless.

Anthony Alford: Grand Theft Auto Colombo.

Roland Meertens: Yes. Well, I guess you could easily say-

Anthony Alford: Show title.

Roland Meertens: Show title. Cool. I think that’s it for today.

Anthony Alford: Oh, too funny.

Roland Meertens: It is too funny.



Central Package Management Now Available in .NET Upgrade Assistant

MMS Founder
MMS Robert Krzaczynski

Article originally posted on InfoQ. Visit InfoQ

The .NET Upgrade Assistant team has recently introduced a significant upgrade: the Central Package Management (CPM) feature. This new capability enables .NET developers to manage dependencies more effectively, streamlining the upgrade process and maintaining consistency across various projects within a solution. The tool is available as a Visual Studio extension and a command-line interface (CLI), making it easier for developers to transition to CPM and stay updated with the latest .NET versions.

The addition of support for Centralized Package Management in the .NET Upgrade Assistant responds to the community’s need for enhanced package management solutions. A few months ago, a Reddit user shared their experience with the tool:

I used it professionally in several WPF apps, and it has worked great. The only remaining issue I had to fix was sorting out NuGet packages.

To upgrade to CPM using Visual Studio, developers can right-click a project in Solution Explorer and select the Upgrade option, then choose the CPM feature. The upgrade process allows for the selection of additional projects in the solution, promoting centralized management of package versions. Furthermore, users can enable transitive pinning for consistent dependency management and specify the centralized path for package storage.


Source: Microsoft Blog

For those who prefer command-line operations, the CLI also supports upgrading to CPM. By navigating to the solution directory in Command Prompt and executing upgrade-assistant upgrade, developers can select the projects they wish to upgrade and straightforwardly confirm their CPM settings.

Upon completion of the CPM upgrade, the .NET Upgrade Assistant consolidates package versions into a single Directory.Packages.props file. This change significantly reduces redundancy and simplifies dependency tracking across projects. The tool has also improved dependency discovery by editing references directly in central files.
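As an illustration of what that consolidation looks like (the package names and versions below are examples, not taken from the article), the solution root gets a single Directory.Packages.props, and the individual project files then reference packages without version numbers:

<!-- Directory.Packages.props at the solution root -->
<Project>
  <PropertyGroup>
    <ManagePackageVersionsCentrally>true</ManagePackageVersionsCentrally>
  </PropertyGroup>
  <ItemGroup>
    <PackageVersion Include="Newtonsoft.Json" Version="13.0.3" />
    <PackageVersion Include="Serilog" Version="3.1.1" />
  </ItemGroup>
</Project>

<!-- In each .csproj, PackageReference entries no longer carry a Version attribute -->
<ItemGroup>
  <PackageReference Include="Newtonsoft.Json" />
</ItemGroup>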

The introduction of CPM has received positive feedback from the community. For example, Alexander Ravenna wrote

This sounds incredibly useful! We have a solution with dozens of projects, and central package management could help us manage version upgrades a lot more easily!

The latest update for Upgrade Assistant now requires Visual Studio version 17.3 or higher, changing from the previous minimum of 17.1. This update is necessary for security reasons, so users with versions below 17.3 should upgrade. As a result, Upgrade Assistant will no longer work with versions older than 17.3.



Presentation: gRPC Migration Automation at LinkedIn

MMS Founder
MMS Karthik Ramgopal Min Chen

Article originally posted on InfoQ. Visit InfoQ

Transcript

Ramgopal: What do we do today at LinkedIn? We use a lot of REST, and we built this framework called Rest.li. It’s because we didn’t like REST as it was in the standard open-source stuff that we wanted to build a framework around it. It’s primarily Java based, uses JSON and familiar HTTP verbs. We thought it was a great idea, and it’s worked fairly well for us. It’s what is used to primarily power interactions between our microservices, as well as between our client applications and our frontends, and as well as our externalized endpoints. We have an external API program. Lots of partners use APIs to build applications on LinkedIn. We have, over the years, started using Rest.li in over 50,000 endpoints. It’s a huge number, and it keeps growing.

Rest.li Overview

How does Rest.li work? We have this thing called Pegasus Data Language. It is a data schema language. You go and author your schemas in Pegasus Data Language, schema describes the shape of your data. From it, we generate what are called record template classes. This is a code generated programming language friendly binding for interacting with your schemas. Then we have these resource classes. Rest.li is weird, where, traditionally in RPC frameworks, you start IDL first, and you go write your service definitions in an IDL.

We somehow thought that starting in Java is a great idea, so you write these Java classes, you annotate them, and that’s what your resource classes are. From it, we generate the IDL. Kind of reverse, and it is clunky. This IDL is in the form of a JSON. All these things combined together in order to generate the type safe request builders, which is how the clients actually call the service. We have these Java classes which are generated, which are just syntactic sugar over HTTP, REST under the hood, but this is how the clients interact with the server. The client also uses the record template bindings. We also generate some human readable documentation, which you can go explore and understand what APIs exist.

Gaps in Rest.li

Gaps in Rest.li. Back in the day, it was pretty straightforward synchronous communication. Over the years, we’ve needed support for streaming. We’ve needed support for deferred responses. We’ve also needed support for deadlines, because our stack has grown deeper and we want to set deadlines from the top. Rest.li does not support any of these things. We also excessively use reflection, string interpolation, and URI encoding. Originally, this was done for simplicity and flexibility, but this heavily hurts performance. We also have service stubs declared as Java classes. I talked about this a bit before. It’s not great. We have very poor support for non-Java servers and clients. LinkedIn historically has been a Java shop, but of late, we are starting to use a lot of other programming languages.

For our mobile apps, for example, we use a lot of Objective-C, Swift on iOS, Java, Kotlin on Android. We’ve built clients for those. Then on the website, we use JavaScript and TypeScript, we’ve built clients for those. On the server side, we are using a lot of Go in our compute stack, which is Kubernetes based, as well as some of our observability pieces. We are using Python for our AI stuff, especially with generative AI, online serving, and stuff. We’re using C++ and Rust for our lower-level infrastructure. Being Java only is really hurting us. The cost of building support for each and every programming language is prohibitively expensive. Although we open sourced it, it’s not been that well adopted. We are pretty much the only ones contributing to it and maintaining it, apart from a few enthusiasts. It’s not great.

Why gRPC?

Why gRPC? We get bidirectional streaming and not just unidirectional streaming. We have support for deferred responses and deadlines. The cool part is we can also take all these features and plug it into higher level abstractions like GraphQL, for example. It works really well. We have excellent out of the box performance. We did a bunch of benchmarking internally as well, instead of trusting what the gRPC folks told us. We have declarative service stubs, which means no more writing clunky Java classes. You write your RPC service definitions in one place, and you’re done with it. We have great support for multiple programming languages.

At least all the programming languages we are interested in are really well supported. Of course, it has an excellent open-source community. Google throws its weight behind it. A lot of other companies also use it. We are able to reduce our infrastructure support costs and actually focus on things that matter to us without worrying about this stuff.

Automation

All this is great. We’re like, let’s move to gRPC. We go to execs, and they’re like, “This is a lot of code to move, and it’s going to take a lot of humans to move it.” They’re like, “No, it’s too expensive. Just don’t do it. The ROI is not there.” In order to justify the ROI, we had to come up with a way to reduce this cost, because three calendar years involving so many developers, it’s going to hurt the business a lot. We decided to automate it. How are we automating it?

Right now, what we have is Rest.li, essentially, and in the future, we want to have pure gRPC. What we need is a bridge between the two worlds. We actually named this bridge Alcantara. Bridged gRPC is the intermediate state where we serve both gRPC and Rest.li, and allow for a smooth transition from the pure Rest.li world to the pure gRPC world. Here we are actually talking gRPC over the wire, although we are serving Rest.li. We have a few phases here. In stage 1, we have our automated migration infrastructure. We have this bridged gRPC mode where we have our Rest.li resources wrapped with the gRPC layer, and we are serving both Rest.li and gRPC. In stage 2, we start moving the clients from Rest.li to gRPC using a configuration flag, gradually shifting traffic. We can slowly start to retire our Rest.li clients. Finally, we can also start to retire the server, once all the clients are migrated over, and we can start serving pure gRPC and retire the Rest.li path.

gRPC Bridged Mode

This is the bridged mode at a high level, where we are essentially able to have Rest.li and gRPC running side by side. It’s an intermediate stepping stone. More importantly, it unlocks new features of gRPC for new endpoints or evolutions of existing endpoints, without requiring the services to do a full migration. We use the gRPC protocol over the wire. We also serve Rest.li for folks who still want to talk Rest.li before the migration finishes. We have a few principles here. We wanted to use codegen for performance in order to ensure that the overhead of the bridge was as small as possible.

We also wanted to ensure that this was completely hidden away in infrastructure without requiring any manual work on the part of the application developers, either on the server or the client. We wanted to decouple the client adoption and the server adoption so that we could ramp independently. Obviously, server needs to go before the client, but the ramp, we wanted to have it as decoupled as possible. We also want to do a gradual client traffic ramp to ensure that if there are any issues, bugs, problems introduced by a bridging infrastructure, they are caught early on, instead of a big bang, and we take the site down, which would be very bad.

Bridge Mode (Deep Dive)

Chen: Next I will go through a deep dive into what the bridge mode really looks like. It has two parts, server side and client side, as we mentioned, and we have to decouple them, because people are still developing and services are running in production; we cannot disrupt them. I will talk about server migration first. Before the migration, each server exposes one endpoint, the Rest.li endpoint (curli). That is the initial state, before the server migration. After running the bridge, this is the bridge mode server. In this bridge mode server, you can see that inside the same JVM we autogenerated a gRPC service. This gRPC service delegates to the Rest.li resource to complete the task. Zooming into this gRPC service, these are the steps we take to do the conversion.

First we go from proto to PDL, that is our data model on the Pegasus side. Then we translate the gRPC request to a Rest.li request, make an in-process call to the Rest.li resource to complete the task, get the response back, and translate it back to gRPC. This is a code snippet for the autogenerated gRPC service, a GET call. Internally, we fill in the blanks here and do exactly what we just discussed: a gRPC request comes in, we translate it to a Rest.li request, make an in-process call to the Rest.li resource to do the job, and afterwards we translate the Rest.li response to a gRPC response. Remember, this is an in-process call. Why in-process? Because otherwise you make a remote call and add an extra network hop. That is the tradeoff we thought about there.
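The slide itself is not reproduced here, but the pattern is easy to sketch. The following is a minimal, self-contained illustration with invented placeholder types; the real generated code works with protobuf messages, gRPC StreamObserver callbacks, and Pegasus record templates.

/** Minimal illustration of the bridged GET pattern with placeholder types (names invented, not LinkedIn's codegen). */
public class GreetingBridgeDemo {
    // Stand-ins for the generated proto and Pegasus request/response classes.
    record GrpcGetGreetingRequest(long id) {}
    record GrpcGetGreetingResponse(String message) {}
    record RestliGetGreetingRequest(long id) {}
    record RestliGetGreetingResponse(String message) {}

    /** Stand-in for the existing Rest.li resource that still owns the business logic. */
    static class GreetingRestliResource {
        RestliGetGreetingResponse get(RestliGetGreetingRequest request) {
            return new RestliGetGreetingResponse("Hello, member " + request.id());
        }
    }

    /** The bridge: translate the gRPC request, call the Rest.li resource in-process, translate the response. */
    static class GreetingGrpcBridge {
        private final GreetingRestliResource resource = new GreetingRestliResource();

        GrpcGetGreetingResponse get(GrpcGetGreetingRequest grpcRequest) {
            RestliGetGreetingRequest restliRequest =
                    new RestliGetGreetingRequest(grpcRequest.id());                  // proto -> Pegasus
            RestliGetGreetingResponse restliResponse = resource.get(restliRequest);  // in-process, no extra network hop
            return new GrpcGetGreetingResponse(restliResponse.message());            // Pegasus -> proto
        }
    }

    public static void main(String[] args) {
        GrpcGetGreetingResponse response = new GreetingGrpcBridge().get(new GrpcGetGreetingRequest(42L));
        System.out.println(response.message()); // Hello, member 42
    }
}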

Under the Hood: PDL Migration

Now I need to talk a little bit about what we are doing under the hood in the automated migration framework. There are two parts, as we discussed before: in Rest.li we have PDL, that is the data model, and we also have the IDL, that is the API. Under the hood, we do two migrations. First, we built tooling to do the PDL migration, and we coined a term for it: protoforming. In the PDL migration, under the hood, you start with Pegasus, that is PDL. We built a pegasus-to-proto schema translator to get to a proto schema. The proto schema becomes your source of truth, because after the migration developers work in the native gRPC developer environment, not with PDL anymore.

The source of truth is the proto schema, and that is where people evolve things. From the source-of-truth proto schema, we use the proto compiler to compile all the different multi-language artifacts. Now, the developer works on the proto, but the underlying service is still Rest.li, so how does this work? When people evolve the proto, we have a proto-to-pegasus reverse translator to get back to Pegasus, and then all our Pegasus tooling continues to work and produces all the Pegasus artifacts, so no business logic is impacted. That is the schema part, but there is also a data part. For the data part, we built an interop generator to create a bridge between Pegasus data and proto data, so that this Pegasus data bridge can work with both the proto binding and the Pegasus binding to do the data conversion. That is, under the hood, what protoforming is for the PDL.

There’s a complication for this, because there’s a feature not complete parity between the Pegasus PDL and the proto spec. There’s official gaps. For example, in Pegasus, we have includes, more like inheritance in a certain way. It’s a weird inheritance, more like a macro embedded there. Proto doesn’t have this. We also have required, optional. In Pegasus, people define which one field is required, which is optional. In proto3 we all know gRPC got rid of this. Then we also have custom default so that people can specify for each field, what is their default value for this.

Proto doesn’t have it. We also have union without alias. That means you can have a union kind of a different type, but there’s no alias name for each one of this. Proto all require to have all this named alias here. We also have custom type to say fixed. Fixed means you have defined a UUID, have the fixed size. We also can define a typeref. Means you can have an alias type to a ladder type. All these things don’t exist in proto. In Pegasus, the important part, we also allow cycle import, but proto doesn’t allow this.

So how do we bridge all these feature-parity gaps? The solution is to introduce Pegasus custom options. Here is an example. This is a simple greeting Pegasus model, with a required field and an optional field. After data model protoforming, we generate a proto, and as you can see, all these fields have Pegasus options defined on them, for example required and optional. We use those in our infra to generate a Pegasus validator that mimics the parity we had before the protoforming, so it throws something like a required-field exception if a field is not specified.

That’s how we use our custom Pegasus option to bridge gap about required, optional. We have all the other Pegasus validators for the cycle import and for the fixed size. They’re all using our custom option hint to generate this. Note this, all this class is autogenerated, and we define the interface when everything is autogenerated there. We have the schema bridged and mapped together. Now you need to bridge the data. This is a data model bridge, we introduced Pegasus bridge interface, and for each data model, we autogenerated the Pegasus data bridge here, the bridge between the proto object and the Pegasus object.

Under the Hood: API Migration

Now let's talk about API migration. The API we just talked about is the IDL we use in Rest.li, basically the API contract. We have a similar flow for the IDL protoforming. You start with the Pegasus IDL. We built a rest.li-to-grpc service translator to translate the IDL into a proto service definition. That becomes your source of truth later on, because after this migration your source code doesn't see this IDL anymore; it is not in source control. What developers see is the proto. With this proto, you use the gRPC compiler to get all the compiled gRPC service artifacts and the service stubs. Now, when developers evolve the proto service, the underlying Rest.li resource is still what is in action, so we built a proto-to-rest.li service backporter to backport the service API change to the Rest.li service signature, so that developers can evolve their Rest.li resource to make the feature change.

Afterwards, all the Pegasus plugins kick in to generate the derived IDL, and every artifact works the same as before, so clients are not impacted at all when people evolve the service. Of course, we also built a service interop generator to bridge the request and response, because when clients interact with the API they send a Rest.li request while over the wire we are sending gRPC requests, so we need a bridge for both the request and the response. The bridged gRPC service makes an in-process call to the evolved Rest.li resource to complete the task. That is the whole flow of how API migration works under the hood. Everything is completely automated; there is nothing developers need to do. The only manual part is when they need to change what the endpoint does, and then they go to the Rest.li resource and fill in the business logic.

I’ll give you some examples, how this API protoforming works. This is the IDL, purely JSON generated from the Java Rest.li resource, and they show you what is supported in my Rest.li endpoint. This is auto-translated, the gRPC service proto. To show you, each one wraps inside a request-response, and also have the Pegasus custom option to help us to generate later on the Pegasus request-response bridge and the client bridge.

To give an example, similar to the data model bridge, we generate a request bridge to translate a Pegasus GetGreetingRequest into a proto GetGreetingRequest. The response works the same way. Both are bidirectional: the autogenerated bridges convert Pegasus to proto and proto to Pegasus. They have to be bidirectional because, as I illustrated before, one direction is used on the server and the other direction on the client, and the two have to work together.

gRPC Client Migration

Now that I’ve finished talking about server migration, let’s talk a little bit about client migration, because when a server migrates, clients are still Rest.li and need to keep working; the two are decoupled. Remember, before, the server was purely Rest.li. How does the client work? We have a factory-generated Rest.li client that makes a curli call to your REST endpoint. With the server bridge in place, you now expose two endpoints, both running side by side inside the same JVM. For the client bridge, we introduce a facade we call Rest.liOverGrpcClient. This facade looks at whether the service has migrated or not; if it hasn’t, the Rest.li client goes through the old curli path.

If the service has migrated, the client bridge class goes through the same pdl-to-proto and rest.li-to-grpc conversion, sends a gRPC request over the wire, and on the way back the gRPC response goes through the grpc-to-rest.li response converter and back to your client. This way the client bridge handles both kinds of traffic seamlessly. This is what the client bridge looks like. It exposes a single execute-request method: you give it a Pegasus request, it converts that Rest.li request into a gRPC request and sends it over the wire using the gRPC stub call, because gRPC promotes type safety and you need the client stub to actually make the call. On the way back, we use the gRPC-response-to-Rest.li-response bridge to get the result back to your client. We call that the Pegasus client bridge.
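Conceptually, the facade behaves roughly like the following sketch (Python for brevity, with invented names; the real Rest.liOverGrpcClient is generated JVM code):

```python
# Sketch of the client facade idea: a single execute_request entry point
# that routes to the legacy Rest.li path or the gRPC path depending on
# whether the target service has migrated. All names are illustrative.
class RestliOverGrpcClient:
    def __init__(self, restli_client, grpc_stub, migrated_services):
        self._restli_client = restli_client
        self._grpc_stub = grpc_stub
        self._migrated = set(migrated_services)

    def execute_request(self, service_name, pegasus_request):
        if service_name not in self._migrated:
            # Old path: plain Rest.li call, behavior unchanged.
            return self._restli_client.send(pegasus_request)
        # New path: convert, call the typed gRPC stub, convert back.
        grpc_request = to_grpc_request(pegasus_request)
        grpc_response = self._grpc_stub.call(grpc_request)
        return to_restli_response(grpc_response)


def to_grpc_request(pegasus_request):   # stand-in for the generated request bridge
    return pegasus_request


def to_restli_response(grpc_response):  # stand-in for the generated response bridge
    return grpc_response
```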

Bridged Principles

To sum up why we use the bridge, there are several principles to highlight. First, as you can see, we use codegen a lot. Why codegen? Performance. You could use reflection instead, and in Rest.li we suffered a lot from that, so we do all the codegen for the bridges for performance. There’s no manual work for your server or your client: run the migration tool, people don’t change their code, and the traffic can switch automatically.

Another important part is that we decoupled the server and client gRPC switches, because we have to handle people continuing to develop: while the server migration is in progress, clients are still sending traffic, and we cannot disrupt any business logic. Finally, we allow a gradual client traffic ramp, shifting traffic from Rest.li to gRPC incrementally so that we get immediate feedback; if we see degradation or regression, we can ramp back down without disrupting business traffic.
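A common way to implement such a ramp, shown here as a generic sketch rather than LinkedIn’s actual mechanism, is a deterministic, percentage-based routing decision:

```python
# Illustrative sketch of a gradual traffic ramp: hash a stable request key
# into a bucket from 0 to 99 and send it to gRPC only if the bucket falls
# under the current ramp percentage, which can be dialed up or down.
import hashlib


def use_grpc(request_key: str, ramp_percentage: int) -> bool:
    digest = hashlib.sha256(request_key.encode()).digest()
    bucket = digest[0] * 100 // 256   # map the first byte to 0..99
    return bucket < ramp_percentage


# Ramp at 5%: roughly 1 in 20 requests takes the gRPC path.
routed = sum(use_grpc(f"req-{i}", 5) for i in range(10_000))
print(f"{routed / 100:.1f}% of requests routed to gRPC")
```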

Pilot Proof (Profiles)

In theory that all works and everything looks fine, but does it really work in real life? We did a pilot. We picked the most complicated service endpoint running at LinkedIn: profiles. Everybody uses it, because it serves all LinkedIn member profiles and has the highest QPS. We ran the bridge automation on this endpoint and then benchmarked it side by side against the previous Rest.li implementation. As you can see, client-server latency in bridge mode not only didn’t degrade, it actually got better, because protobuf encoding is more efficient than Rest.li’s Pegasus encoding. There is a slight increase in memory usage, which is understandable because we are running both paths in the same JVM. We verified this works before rolling out mass migration.

Mass Migration

It is simple to run this on a few IDLs in one MP service, one endpoint. But think about it: we have 50,000 endpoints and 2,000 services running at LinkedIn, and doing this as a mass migration without disruption is a very challenging task. Before I go on, I want to talk about the challenge here. Unlike most companies in the industry, LinkedIn doesn’t use a monorepo; we use multi-repo. That means a decentralized developer environment and deployment, with each service in its own repo. The pro is flexible development: everybody is decoupled from everyone else and not blocked by anyone else’s work. But it also causes issues, and it’s the most difficult part of our mass migration, because, first, every repo has to have a green build.

The repo has to build successfully before I can run the migration; otherwise I don’t know whether my migration caused the build failure or the build was already failing. Deployment matters too, because after migration I need the service deployed so that I can ramp traffic. If people don’t deploy their code, the migrated change is just dead code and has no effect, so I can’t shift traffic. Deployment freshness also needs to be there.

Also, at this scale we need a way to track what is migrated and what is not, what the progress is, and what the errors are, so a migration tracker needs to be in place. Last but not least, we need up-to-date dependencies, because the repos depend on each other; you need to bring dependencies up to date correctly so that our latest infrastructure changes are applied.

Those are all the prerequisites that need to be finished before we can start mass migration. This diagram shows how much complexity multi-repo adds to our mass migration. As we said, PDL is the data model and IDL is the API model. With PDL, just like proto, each MP, each repo’s service, can define its own data models, and another service can import that PDL into its repo and compose another PDL that is in turn imported by yet another repo; the API IDL also depends on your PDLs.

To make things more complicated, in Rest.li people can define a PDL in one API repo and then define the implementation in another repo, and the implementation repo depends on the API repo. All of this, data model dependencies plus service dependencies, creates complicated dependency ordering, up to 20 levels deep, because you have to protoform the PDLs a repo depends on before you protoform that repo; otherwise it cannot import its proto. That is why we had to figure out the dependency ordering to make sure we migrate repos in the right order, in the right sequence.
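At its core this is a topological ordering over the repo dependency graph; here is a small sketch of the idea using Python’s standard library (the dependency data is a toy example, not LinkedIn’s):

```python
# Sketch of dependency-ordered migration: a repo is protoformed only after
# every repo whose PDLs it imports has been protoformed.
from graphlib import TopologicalSorter

# repo -> set of repos it depends on
dependencies = {
    "profiles-impl": {"profiles-api"},
    "profiles-api": {"common-models"},
    "feed-impl": {"feed-api", "common-models"},
    "feed-api": {"common-models"},
    "common-models": set(),
}

# static_order() yields dependencies before their dependents, i.e. the
# order in which protoforming can safely run.
print(list(TopologicalSorter(dependencies).static_order()))
# e.g. ['common-models', 'profiles-api', 'feed-api', 'profiles-impl', 'feed-impl']
```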

To ensure the target changes propagate to all our repos, we built a high-level automation called the grpc-migration-orchestrator. We’re eating our own dogfood: the orchestrator itself is built with gRPC. It consists of several components, the most important being the dashboard. Everybody knows we need a dashboard to track which MPs have migrated, which are ready to migrate, and which state they’re in: PDL protoforming, IDL protoforming, post-processing, and so on.

We have a database to track that and a dashboard to display it. There’s a gRPC service endpoint that performs all the actions to execute the pipeline, going through the PDL and IDL steps and working with our migration tool, and a job runner framework for the long-running jobs. There are extra complications beyond PDL and IDL migration: because you expose a new endpoint, we need to allocate a port, so we talk to the porter to assign a gRPC port. We also talk to our acl-tool to enable authentication and authorization on the new endpoint, the ACL migration.

We also set up the service discovery configuration for the new endpoint, so that load balancing and service discovery work for the migrated gRPC endpoint the same as they did for the Rest.li endpoint. Those are all the components; we built this orchestrator to handle everything automatically.

Besides that, we built a dry run process to simulate the mass migration so that we can discover bugs preemptively, instead of finding them during the real mass migration when developers would get stuck. The dry run workflow includes both automated and manual work. On the automated side, we have a planner that does offline data mining to figure out the list of repos to migrate based on the dependencies, and then we run the automation framework, the infra code, on a remote developer cluster.

After that, we analyze the logs and aggregate the errors into categories. Finally, for each daily run we capture a snapshot of the execution on a remote drive so that we can analyze and reproduce issues easily. Every day a regression report is generated from the daily run; developers pick up this report to see regressions and fix bugs, and this becomes a [inaudible 00:31:29] every day. This is the dry run process we developed to give us more confidence before we kick off a mass migration across the whole company. As we mentioned before, the gRPC bridge is just a stepping stone to get us to the end state without disrupting the business. Our end goal is to go gRPC native.

gRPC Native

Ramgopal: We first want to get the client off the bridge. The reason is that once you have moved all the traffic to gRPC via the switch we described earlier, you should be able to clean up all the Rest.li code and use gRPC directly. The server is decoupled because of the gRPC facade: whether it has been rewritten or is still using Rest.li under the hood doesn’t really matter, because the client is talking gRPC. We want to go from what’s on the left to what’s on the right. It looks fairly straightforward, but it’s not, because of all the differences we described before.

We also want to get the server off the bridge. A prerequisite, of course, is that all the client traffic has shifted. Once it has, we have to do a few things. We have to delete those ugly-looking Pegasus options, because they no longer make sense once all the traffic is gRPC, while ensuring that all the logic they enabled, such as validation, stays in place, either in application code or elsewhere. We also have to replace the in-process Rest.li call with the ported-over business logic behind that Rest.li resource, because otherwise that business logic won’t execute. And we need to delete all the Rest.li artifacts: the old schemas, the old resources, the request builders, all of that needs to be gone. There are a few problems here, as we described.

We spoke about some of the differences between Pegasus and proto, which we tried to patch over with options. There are also differences in the binding layer. For example, the Rest.li code uses mutable bindings with getters and setters, while gRPC bindings are immutable, at least in Java. It’s a huge paradigm shift. We also have to change a lot of code, 20 million lines, which is pretty huge. What we’ve seen in our local experimentation is that an abstract syntax tree based codemod is simply not enough; there are so many nuances that the accuracy rate is horrible. It’s not going to work. Of course, you could ask humans to do this themselves, but as I mentioned before, that wouldn’t fly; it’s very expensive. So we are taking the help of generative AI.

We are essentially using both foundation models and fine-tuned models; we more or less wave our hands and ask AI to do the migration for us. We have already used this for a few internal migrations. For example, just like Rest.li, long ago we built another framework called Deco for functionality similar to GraphQL. We’ve since realized we don’t like Deco and want to move to GraphQL, and we’ve used this framework to migrate a lot of our code off Deco to GraphQL.

In the same way, on the offline side we were using Pig quite a bit, and we’re now moving to Hive and Spark. We’re using this generative AI based system heavily for that migration too. Is it perfect? No, it’s about 70% to 80% accurate. We are continuously improving the pipeline and adopting newer models to increase its efficacy. We are still in the process of doing this.

Key Takeaways

What are the key takeaways? We’ve essentially built an automation framework, which right now is taking us to bridge mode, and we are working on plans to get off the bridge and do the second step. We are essentially switching from Rest.li to gRPC under the hood, without interrupting the business. We are undertaking a huge scope, 50,000 endpoints across 2,000 services, and we are doing this in a compressed period of 2 to 3 quarters with a small infrastructure team, instead of spreading the pain across the entire company over 2 to 3 years.

Questions and Answers

Participant 1: Did this cause any incidents, and how much harder was it to debug this process? Because it sounds like it adds a lot of complexity to transfer data.

Ramgopal: Is this causing incidents? How much complexity does it add to debug issues because of this?

We’ve had a few incidents so far; obviously, a change like this with no incidents at all would be almost magical, and in the real world there’s no magic. Debugging has actually been pretty good, because a lot of the automation we showed is all code you can step through, and you often get really nice traces you can look at to understand what happened. What we did not show is that we’ve also automated a lot of the alerts and dashboards as part of the migration.

All the alerts we had on the Rest.li side, you also get on the gRPC side, in terms of error rates and so on. We’ve also instrumented and added a lot of logging and tracing, so you can look at the dashboards and know exactly what went wrong and where. We also have a tool that builds a timeline of when major changes happened: for example, if a service starts having errors after we begin routing gRPC traffic to it, the timeline analyzer shows the correlation, so most likely that’s the problem. Our MTTR and MTTD have actually been largely unaffected by this change.

Chen: We also built dark cluster infrastructure along with that, so that when a team migrates, they can set up a dark cluster to duplicate traffic before the real ramp.

Participant 2: In your schema, or in your migration process, you went from the PDL, generated gRPC resources, and said that’s the source of truth, but then you generated PDL again. Is it because you want to change your gRPC resources and then have those changes flow back to PDL, so that you only have to change this one flow and get both out of it?

Ramgopal: Yes, because we will still have a few services and clients using Rest.li that haven’t undergone the migration yet, and you still want them to see new changes to the APIs.



PyTorch 2.5 Release Includes Support for Intel GPUs

MMS Founder
MMS Anthony Alford

Article originally posted on InfoQ. Visit InfoQ

The PyTorch Foundation recently released PyTorch version 2.5, which contains support for Intel GPUs. The release also includes several performance enhancements, such as the FlexAttention API, TorchInductor CPU backend optimizations, and a regional compilation feature which reduces compilation time. Overall, the release contains 4095 commits since PyTorch 2.4.

The Intel GPU support was previewed at the recent PyTorch conference. Intel engineers Eikan Wang and Min Jean Cho described the PyTorch changes made to support the hardware. This included generalizing the PyTorch runtime and device layers which makes it easier to integrate new hardware backends. Intel specific backends were also implemented for torch.compile and torch.distributed. According to Kismat Singh, Intel’s VP of engineering for AI frameworks:

We have added support for Intel client GPUs in PyTorch 2.5 and that basically means that you’ll be able to run PyTorch on the Intel laptops and desktops that are built using the latest Intel processors. We think it’s going to unlock 40 million laptops and desktops for PyTorch users this year and we expect the number to go to around 100 million by the end of next year.

The release includes a new FlexAttention API which makes it easier for PyTorch users to experiment with different attention mechanisms in their models. Typically, researchers who want to try a new attention variant need to hand-code it directly from PyTorch operators. However, this could result in “slow runtime and CUDA OOMs.” The new API supports writing these instead with “a few lines of idiomatic PyTorch code.” The compiler then converts these to an optimized kernel “that doesn’t materialize any extra memory and has performance competitive with handwritten ones.”
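As a rough illustration of the kind of code the API enables, here is a minimal sketch assuming the torch.nn.attention.flex_attention module shipped in this release; the bias function and tensor shapes are arbitrary choices for the example:

```python
# Minimal FlexAttention sketch: score_mod rewrites each attention score,
# here adding a simple relative-position penalty, and torch.compile can
# fuse it into a single kernel. Shapes and the bias are illustrative.
import torch
from torch.nn.attention.flex_attention import flex_attention


def relative_bias(score, batch, head, q_idx, kv_idx):
    return score - 0.1 * (q_idx - kv_idx).abs()


b, h, s, d = 2, 4, 128, 64
q, k, v = (torch.randn(b, h, s, d) for _ in range(3))

# flex_attention = torch.compile(flex_attention)  # compile for the fused kernel
out = flex_attention(q, k, v, score_mod=relative_bias)
print(out.shape)  # torch.Size([2, 4, 128, 64])
```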

Several performance improvements have been released in beta status. A new backend Fused Flash Attention provides “up to 75% speed-up over FlashAttentionV2” for NVIDIA H100 GPUs. A regional compilation feature for torch.compile reduces the need for full model compilation; instead, repeated nn.Modules, such as Transformer layers, are compiled. This can reduce compilation latency while incurring only a few percent performance degradation. There are also several optimizations to the TorchInductor CPU backend.
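The regional compilation idea can be sketched as follows: rather than wrapping the whole model in torch.compile, each repeated block is compiled on its own (the model below is a toy stand-in, not a real Transformer):

```python
# Sketch of regional compilation: compile only the repeated blocks so the
# compiled artifact is reused across layers and cold-start compile time drops.
import torch
import torch.nn as nn


class Block(nn.Module):
    def __init__(self, dim: int = 256):
        super().__init__()
        self.mixer = nn.Linear(dim, dim)
        self.mlp = nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim))

    def forward(self, x):
        return x + self.mlp(self.mixer(x))


model = nn.Sequential(*[Block() for _ in range(8)])

# Instead of torch.compile(model), compile each repeated block.
for i in range(len(model)):
    model[i] = torch.compile(model[i])

print(model(torch.randn(4, 16, 256)).shape)  # torch.Size([4, 16, 256])
```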

Flight Recorder, a new debugging tool for stuck jobs, was also included in the release. Stuck jobs can occur during distributed training, and could have many root causes, including data starvation, network issues, or software bugs. Flight Recorder uses an in-memory circular buffer to capture diagnostic info. When it detects a stuck job, it dumps the diagnostics to a file; the data can then be analyzed using a script of heuristics to identify the root cause.

In discussions about the release on Reddit, many users were glad to see support for Intel GPUs, calling it a “game changer.” Another user wrote:

Excited to see the improvements in torch.compile, especially the ability to reuse repeated modules to speed up compilation. That could be a game-changer for large models with lots of similar components. The FlexAttention API also looks really promising – being able to implement various attention mechanisms with just a few lines of code and get near-handwritten performance is huge. Kudos to the PyTorch team and contributors for another solid release!

The PyTorch 2.5 code and release notes are available on GitHub.



MongoDB, Inc. (NASDAQ:MDB) Shares Sold by Raymond James & Associates – MarketBeat

MMS Founder
MMS RSS

Posted on mongodb google news. Visit mongodb google news

Raymond James & Associates lowered its position in MongoDB, Inc. (NASDAQ:MDB) by 21.3% in the 3rd quarter, according to its most recent 13F filing with the SEC. The firm owned 44,082 shares of the company’s stock after selling 11,908 shares during the period. Raymond James & Associates owned about 0.06% of MongoDB, worth $11,918,000 at the end of the most recent quarter.

Several other hedge funds and other institutional investors have also made changes to their positions in the stock. Sunbelt Securities Inc. lifted its holdings in shares of MongoDB by 155.1% in the 1st quarter. Sunbelt Securities Inc. now owns 125 shares of the company’s stock valued at $45,000 after acquiring an additional 76 shares during the last quarter. Diversified Trust Co raised its stake in shares of MongoDB by 19.0% during the first quarter. Diversified Trust Co now owns 4,111 shares of the company’s stock valued at $1,474,000 after acquiring an additional 657 shares in the last quarter. Sumitomo Mitsui Trust Holdings Inc. raised its stake in shares of MongoDB by 2.3% during the first quarter. Sumitomo Mitsui Trust Holdings Inc. now owns 182,727 shares of the company’s stock valued at $65,533,000 after acquiring an additional 4,034 shares in the last quarter. Azzad Asset Management Inc. ADV increased its holdings in MongoDB by 650.2% during the first quarter. Azzad Asset Management Inc. ADV now owns 5,214 shares of the company’s stock valued at $1,870,000 after buying an additional 4,519 shares during the period. Finally, BluePath Capital Management LLC purchased a new position in MongoDB during the first quarter valued at approximately $203,000. Institutional investors own 89.29% of the company’s stock.

Insiders Place Their Bets

In other MongoDB news, CFO Michael Lawrence Gordon sold 5,000 shares of the stock in a transaction that occurred on Monday, October 14th. The shares were sold at an average price of $290.31, for a total value of $1,451,550.00. Following the sale, the chief financial officer now directly owns 80,307 shares of the company’s stock, valued at approximately $23,313,925.17. This represents a 0.00% decrease in their position. The sale was disclosed in a document filed with the SEC, which is available at this link. Also, Director Dwight A. Merriman sold 1,385 shares of the stock in a transaction on Tuesday, October 15th. The shares were sold at an average price of $287.82, for a total transaction of $398,630.70. Following the completion of the transaction, the director now directly owns 89,063 shares in the company, valued at $25,634,112.66. This trade represents a 0.00% decrease in their ownership of the stock. The disclosure for this sale can be found here. Insiders have sold a total of 23,281 shares of company stock worth $6,310,411 in the last ninety days. 3.60% of the stock is owned by corporate insiders.

Wall Street Analysts Forecast Growth

Several research firms have issued reports on MDB. Scotiabank lifted their price target on shares of MongoDB from $250.00 to $295.00 and gave the stock a “sector perform” rating in a report on Friday, August 30th. Mizuho lifted their price target on shares of MongoDB from $250.00 to $275.00 and gave the stock a “neutral” rating in a report on Friday, August 30th. Truist Financial lifted their price target on shares of MongoDB from $300.00 to $320.00 and gave the stock a “buy” rating in a report on Friday, August 30th. Piper Sandler lifted their price target on shares of MongoDB from $300.00 to $335.00 and gave the stock an “overweight” rating in a report on Friday, August 30th. Finally, Royal Bank of Canada reaffirmed an “outperform” rating and issued a $350.00 price target on shares of MongoDB in a report on Friday, August 30th. One research analyst has rated the stock with a sell rating, five have assigned a hold rating, twenty have issued a buy rating and one has given a strong buy rating to the company. According to MarketBeat.com, the stock presently has a consensus rating of “Moderate Buy” and an average target price of $337.96.


MongoDB Price Performance

Shares of MDB stock traded up $3.03 on Tuesday, reaching $275.21. 657,269 shares of the company were exchanged, compared to its average volume of 1,442,306. The company has a market cap of $20.19 billion, a price-to-earnings ratio of -96.86 and a beta of 1.15. The business has a 50-day moving average price of $272.87 and a 200 day moving average price of $280.83. The company has a debt-to-equity ratio of 0.84, a quick ratio of 5.03 and a current ratio of 5.03. MongoDB, Inc. has a 1 year low of $212.74 and a 1 year high of $509.62.

MongoDB (NASDAQ:MDB) last announced its quarterly earnings data on Thursday, August 29th. The company reported $0.70 earnings per share (EPS) for the quarter, beating the consensus estimate of $0.49 by $0.21. MongoDB had a negative net margin of 12.08% and a negative return on equity of 15.06%. The company had revenue of $478.11 million during the quarter, compared to analysts’ expectations of $465.03 million. During the same quarter in the prior year, the company earned ($0.63) EPS. MongoDB’s revenue was up 12.8% compared to the same quarter last year. Equities research analysts forecast that MongoDB, Inc. will post -2.39 earnings per share for the current fiscal year.

MongoDB Profile


MongoDB, Inc., together with its subsidiaries, provides a general-purpose database platform worldwide. The company provides MongoDB Atlas, a hosted multi-cloud database-as-a-service solution; MongoDB Enterprise Advanced, a commercial database server for enterprise customers to run in the cloud, on-premises, or in a hybrid environment; and Community Server, a free-to-download version of its database, which includes the functionality that developers need to get started with MongoDB.


Institutional Ownership by Quarter for MongoDB (NASDAQ:MDB)


Article originally posted on mongodb google news. Visit mongodb google news



The HackerNoon Newsletter: Tesla’s Good Quarter (10/28/2024)

MMS Founder
MMS RSS

Posted on mongodb google news. Visit mongodb google news

How are you, hacker?

🪐 What’s happening in tech today, October 28, 2024?

The HackerNoon Newsletter brings the HackerNoon homepage straight to your inbox. On this day in history, the Cuban Missile Crisis ended in 1962, St. Louis’s Gateway Arch was completed in 1965, and Italy invaded Greece in 1940. And we present you with these top quality stories.

From Timing is Important for Startups and Product Launches to Tesla’s Good Quarter, let’s dive right in.

By @docligot [ 5 Min read ] Generative AI threatens journalism, education, and creativity by spreading misinformation, displacing jobs, and eroding quality. Read More.

By @companyoftheweek [ 2 Min read ] This week, HackerNoon features MongoDB — a popular NoSQL database designed for scalability, flexibility, and performance. Read More.

By @nebojsaneshatodorovic [ 3 Min read ] The Anti-AI Movement Is No Joke. Can you think of a movie where AI is good? Read More.

By @pauldebahy [ 4 Min read ] Many great startups and products were launched too early or too late, impacting their potential for success and leaving them vulnerable to competitors. Read More.

By @sheharyarkhan [ 4 Min read ] Tesla’s Q3 results couldn’t have come at a better time as Wall Street was becoming increasingly concerned with the company’s direction. Read More.

🧑‍💻 What happened in your world this week?

It’s been said that writing can help consolidate technical knowledge, establish credibility, and contribute to emerging community standards.
Feeling stuck? We got you covered ⬇️⬇️⬇️

ANSWER THESE GREATEST INTERVIEW QUESTIONS OF ALL TIME

We hope you enjoy this week’s worth of free reading material. Feel free to forward this email to a nerdy friend who’ll love you for it. See you on Planet Internet! With love,
The HackerNoon Team ✌️

Article originally posted on mongodb google news. Visit mongodb google news
