Presentation: Adaptable Innovation: How Microsoft Leverages React Native for Strategic Advantage
MMS • Lorenzo Sciandra
Article originally posted on InfoQ.
Transcript
Sciandra: Where I want to start from is a very inspirational quote from my grandmother. She used to tell me that picking a technology is like a box of chocolates: you never know what you’re going to get. Maybe she never actually said those words. Probably no one has ever said those words. I think there’s something true in there, which is that sometimes we have to face some decisions. We need to work on a project, work on a product, work on an app, work on an experience, and we need to make some technical decisions. In certain situations, we make these decisions based on the information we have around us. Then, maybe six months later, this decision blows back, and it explodes in our face. What I want to do is talk a bit about how we’ve been successfully using React Native for over six years now at Microsoft. To do that, I’m going to split this into two parts. I’m going to talk about some technical reasons, but I also want to talk about some non-technical reasons. Hopefully, I will give you a list of considerations to leverage for your next meeting.
I’m Lorenzo Sciandra. I’m a senior software engineer at Microsoft. I’ve been a React Native maintainer since 2018, so I’m biased, but hopefully the things I’ll describe can also apply to other technologies.
Adaptability
Let’s talk about adaptability. Let’s talk about the technical reasons why React Native makes sense in certain situations. How many of you are using React Native right now, or know about React Native? This is pretty much all that you’re going to need to know about React Native. It’s basically a way to make apps on native platforms that, through a communication layer, use JavaScript to do the heavy lifting, to do basically everything that you need for your app to be useful to your user. To hammer home this point very quickly, while we’re at the start: one of the key advantages React Native has is that literally everything that gets rendered on screen, even if the code you write is JavaScript, at the end of the day is fully native code.
In this scenario, we have a super simple React Native app, but the nodes there in the VS Code debugger show you how it’s a UINavigationController, a UIViewController. It’s all native in the end, which is a key feature of React Native, of course. I was lying, this is actually the graph you’re going to need the most. It’s slightly more complicated, and you’ll see in a bit why I’ve added all these blocks. Basically, we have native UI and native modules on one end, and we have the JavaScript React Native UI code, the JavaScript modules, and the business logic. When you think of a React Native app, this is usually what you deal with. In a normal, standard, vanilla React Native project, this is what is called a greenfield app. This means that you’ve built your app from scratch using React Native. In this scenario, very simply, your developers just write the JavaScript code, and the user interacts with what is, from their perspective, a normal native app.
Out of the box, React Native comes with iOS and Android support, so your user base is already doubled, in a way, with just one code base. If you’ve never used React Native before and you’re only interested in the mobile world, this is pretty much all you need to know about the way it works: the communication layer has a JavaScript side, and an Apple/Android side that communicates with the iOS and Android UI and native modules. Those are the things that then end up in the app for the user. Already, by now, you can probably tell, if you’ve worked in the mobile space, that this could bring a significant advantage. In fact, React Native is really good at speeding up the time to market, and not just in this base scenario, as we’ll see in a bit.
In fact, this is where things start to get interesting. I was showing you earlier how we have, in this communication layer, both a JavaScript side and the native platform side. Nothing is stopping us from expanding this JavaScript side to have even more native platforms. This is actually what we did. At Microsoft, we made React Native for macOS and React Native for Windows because, of course, we are a desktop-heavy company in some of our organizations, like mine, Office, or think about Windows. We saw the potential in having a JavaScript-based solution to develop apps, and we decided to expand on it and use it ourselves. We’re not the only ones betting on React Native for this adaptability. In fact, there are so many platforms that React Native can support. Right now, there are two big pushes, one around visionOS and one around the TV platforms, backed by Amazon and Theodo, which is a consulting company here in the UK. This already introduces a certain level of adaptability. You have React Native and it’s not just for one platform. This is pretty interesting.
Let’s add another dimension to this adaptability. Earlier, when I was showing you this greenfield app, what I was saying is, at the end of the day, your user interacts with a native app on their native OS. When the user uses a normal, let’s say Swift or Kotlin app, what they do is just interact with a 100% native app built with 100% native UIs and native modules. Those two squares are very similar to what I was showing before, so what’s stopping us from combining them? This is what, in the React Native space, is called a brownfield app. This means you have a preexisting native app, and you inject React Native: you add the communication bridge, and you start writing some of the features, some of the experiences, in React Native. That means that you can add, in your native app, some parts in React Native. We can even take this one step further. This graph is too complicated for now, so let’s just make it a bit simpler.
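In a real brownfield app, the JavaScript side registers entry points through React Native’s AppRegistry and the native side hosts them in a native root view; the shape of the idea can be sketched in a few lines of plain TypeScript. Everything here (the registry, the names, the string-returning "render") is a toy stand-in for illustration, not the actual React Native API:

```typescript
// Toy sketch of the brownfield idea: the host app keeps most of its screens
// fully native, and looks up individual JavaScript-driven experiences by name.
type Experience = (props: Record<string, string>) => string;

const registry = new Map<string, Experience>();

// The JavaScript bundle registers the experiences it provides
// (in real React Native: AppRegistry.registerComponent).
function registerExperience(name: string, render: Experience): void {
  registry.set(name, render);
}

// The native host app asks for one experience at a time and embeds its
// output alongside its existing native screens.
function mountExperience(name: string, props: Record<string, string>): string {
  const render = registry.get(name);
  if (!render) {
    throw new Error(`No experience registered under "${name}"`);
  }
  return render(props);
}

// One shared experience, deployed into several host apps.
registerExperience("LivePersonaCard", (props) => `PersonaCard(${props.email})`);

const inOutlook = mountExperience("LivePersonaCard", { email: "ada@example.com" });
// inOutlook: "PersonaCard(ada@example.com)"
```

The point of the sketch is that the native app stays in charge: it decides when and where a JavaScript-driven experience appears, one screen at a time.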
Now let’s enter a situation much closer to what we have at Microsoft, where we have multiple apps. Let’s say that we have an app, let’s call it Outlook. Let’s take another one, let’s call it Office. If both these apps already have a bridge and we want to write some code in JavaScript to reuse in our native apps, nothing is stopping us from writing it once, and then deploying it in the different host apps. This is something that we do a lot at Microsoft, actually. This slide is from an old talk that I would highly recommend if you’re interested in this specific topic of what I usually call the multi-monorepo world at Microsoft. You can go and check that out. Basically, we have so many massive monorepos that even just moving the code and getting one host app, something like Outlook or Office, to consume an experience built in another monorepo requires some work, and that’s why we made some custom tooling for ourselves in the React Native space. This is all open source. There’s actually another talk that we’ve done where we go deeper into this tooling. We’ll see some of it again in a bit.
We can take it one more step further. As I was showing earlier, the brownfield situation feels, to the end user, very much like using a normal app, but for us, we’re adding a communication layer. We’re adding some JavaScript code, and that doesn’t come for free. Of course, there is some overhead that we’re introducing. For example, we might introduce an increase in bundle size. In particular, what this could lead to is a slower startup time, or some interactions might feel slower. React Native is actually so adaptable that one of the things you can do is replace the communication layer with something custom. We actually have an implementation called react-native-host out in one of our open-source repos. In this scenario, it may look a bit weird. Why would I need it? Let me show you a slightly more complicated example. Let’s say that you have a very big app, with many screens in it, and you’ve built two or three of them with React Native.
Maybe one of them is five screens deep. One is two screens deep. The other one is 10 screens deep, a page that no one ever sees. You don’t know when the user is going to reach each of these experiences, but you basically have just one communication layer: you need to build it up at the start and then hope that at some point it will become useful. If you have a custom communication layer, what you can do is tailor its behavior, for example, to only build it up when you’re at the n-minus-one screen, right before that specific experience. You can build it up and tear it down for each specific need. This is not even my final form. When I was showing you earlier that the business logic was separate, it was on purpose. Because when you write some code that interacts with the server, for example, and this is more relevant when maybe you have a big corporate app, there are many backends you need to interact with for different services, and then a new feature comes in.
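The lazy build-up/tear-down behavior described above is implemented natively in react-native-host, but the idea itself is simple enough to sketch in plain TypeScript. Everything here, including the fake bridge and the class name, is an illustrative stand-in, not the real API:

```typescript
// Illustrative sketch of a lazily-initialized communication layer.
// Building the real bridge is expensive, so the host only does it when the
// user navigates within one screen of a React Native experience.
interface Bridge {
  call(module: string, method: string): string;
}

class LazyHost {
  private bridge: Bridge | null = null;

  get isWarm(): boolean {
    return this.bridge !== null;
  }

  // Called when navigation reaches the screen right before the RN experience.
  warmUp(): void {
    if (this.bridge === null) {
      // Stand-in for the expensive native bridge construction.
      this.bridge = { call: (m, fn) => `${m}.${fn}()` };
    }
  }

  invoke(module: string, method: string): string {
    this.warmUp(); // build on demand if navigation skipped the warm-up
    return this.bridge!.call(module, method);
  }

  // Called when the user leaves the experience, reclaiming memory.
  tearDown(): void {
    this.bridge = null;
  }
}

const host = new LazyHost();
host.warmUp(); // user is one screen away: pay the cost now
const result = host.invoke("PersonaCard", "show");
host.tearDown(); // user left: release the runtime
```

The design point is that the app no longer pays the startup cost for experiences the user may never open; it pays it just-in-time, per experience.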
Basically, that business logic is pretty much standard JavaScript/TypeScript code. Say we’ve built something very quickly for web, so we have the JavaScript code built for the web app, and we want to very quickly introduce it into our native platforms, but we’ve only built the business logic: what if we do this? We have actually found out at Microsoft that there is this special mode of using React Native that we’ve basically come to call headless. This allows your UI to be built fully natively, maybe by your app team that is working on that specific platform. They can directly interact with the code that the backend team has provided in JavaScript for the web app, without having to rewrite it from scratch just for your app. At Microsoft, this looks much more like this. We have a brownfield scenario. We have the custom communication layer. Headless is one of the more interesting use cases of React Native, and it enables a faster time to market, especially when you have a range of apps that vary across a whole series of platforms.
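The headless split is easy to picture in code: the shared business logic is plain TypeScript with no UI in it, so the web app and the fully native UIs can all call the same function. A minimal sketch, with made-up names purely for illustration:

```typescript
// Shared business logic: plain TypeScript, no UI framework involved.
// The web app, and the fully native iOS/Android/Windows UIs running this
// headlessly through React Native, all call the same function.
interface Contact {
  givenName: string;
  familyName: string;
  unreadCount: number;
}

function formatContactBadge(contact: Contact): string {
  const name = `${contact.givenName} ${contact.familyName}`;
  // Cap the badge at 99+ the same way on every platform.
  const badge = contact.unreadCount > 99 ? "99+" : String(contact.unreadCount);
  return `${name} (${badge})`;
}

// The native UI layer only needs the string back; it renders its own
// platform-appropriate controls around it.
const label = formatContactBadge({
  givenName: "Ada",
  familyName: "Lovelace",
  unreadCount: 120,
});
// label: "Ada Lovelace (99+)"
```

Because nothing in the module touches the DOM or native views, the same logic (and the same edge-case behavior, like the 99+ cap) ships once and behaves identically everywhere.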
A React Native App at Microsoft – Strategic Reasons
I’ve shown you a bunch of different ways of using React Native: this way, that way, the other way. At this point you may be asking, but what does it really look like? What does a React Native app at Microsoft look like? The answer is, of course, it depends. We actually have many apps that use React Native. This is just a subset; these are the crown jewels. We have Office, Outlook, and Teams, which all use React Native in a brownfield way. We have Xbox and Skype, which use it in a greenfield way. Not just that, we have many shared experiences. For example, this is called the contact card, or the live persona card, and it’s one piece of code that works across all these different platforms. This is Outlook. In a way, we have both React Native apps at Microsoft that look like standard greenfield apps, and scenarios that are much more complicated: intercommunicating, sharing code, some more custom than others. Some use React Native more than others.
The great thing about React Native is really that both of these scenarios are valid React Native cases, and this really opens up a world of possibilities. We all know that technology is not what wins the hearts of our leadership. It’s not, I’ve made it 10 milliseconds faster, 100 milliseconds faster. They want to hear the strategic reasons. They want to hear the money talk. To talk about strategy, I wanted to start from something that happened a while back, around 2022. We had an internal conversation where a developer from a different team came into an engineering room and basically asked, why are we using React Native over this other technology? Surprisingly, the conversation stayed very civil. No one threw anything at anyone. The more interesting bit for me is that it was a very good treasure trove of good strategic reasons. One thing that you need to learn about Microsoft is that we love a three-letter acronym. If we can give it a three-letter acronym, we’re going to make it into a three-letter acronym. So here’s the first GSR, good strategic reason.
The first one is actually hireability. Maybe some of you that are more familiar with the JavaScript space saw this coming. There’s this really interesting quote from the McKinsey Technology Trends Outlook 2023, where they stated very clearly that a lack of talent is a top issue constraining growth, where talent here means, of course, having actual people that work for you. React Native is based on JavaScript, and constantly and consistently, across any survey that you can find in the space, JavaScript is at the top; it is the biggest pool of developers. It literally opens up the door to the biggest pool of developers. Because it’s JavaScript, it also means that it’s actually faster to onboard people from different projects.
The second good strategic reason is flexibility. I already mentioned that brownfield is a very interesting way of using React Native. The really great thing about it is that it doesn’t come in one size. You can literally make it as much or as little a part of your app as you want, compared to some other cross-platform solutions for mobile that literally require you to throw everything out the door and start from scratch with a new stack. React Native really allows you to test the waters, only introduce it for a bit, see if it works, and really be flexible in how you introduce it into your code base and how much it does. The third one is that it’s alive. Of course, I’m talking about a piece of code, but what I mean is that it’s very much alive as an open-source project. One of the other reasons why I didn’t want to go too much into the technical details of the graph and how React Native works is because I would have gotten to this point and basically said, now forget everything, because it’s going to change next month.
This is because of a project called the New Architecture. The team at Meta, in collaboration with us and other companies, has figured out over the years that there are some pretty significant bottlenecks in the existing architecture of React Native. Meta has been doing a lot of work to create a new architecture under the hood for React Native so that all of those bottlenecks are removed. Basically, we’re moving from a single bottleneck to an entire interface that allows for much more permeability between JavaScript and native. I don’t know about you, but React Native is from 2015, so it’s almost nine years old. How many open-source projects can really say that nine years in, they’re still super active, still being actively worked on by the original owner, and doing significant work that is supposed to help everyone without being as disruptive as Angular 2?
There are also some other interesting strategic reasons that maybe make more sense at the size of Microsoft, for example, the cross-industry synergies. When I showed the showcase earlier with some of the apps, I didn’t mention the fact that some other big companies are involved in this space. In certain situations, in certain rooms, when we need to justify why we’re doing certain work, why we’re following certain efforts, the fact that we can point out that we’re working together with Meta, with Amazon, on certain efforts, or that we’re doing this because we treat them as customers of some of the things we do, really helps get buy-in from senior leadership. This is a very good thing to keep in mind. Some big companies are involved in the space; you can get involved with them, and that may help you move your goals forward on the back of this interconnectivity.
Then, of course, there’s number five. This is the big one. This is probably also the trivial one. This is the one that we say about a lot of different projects, and most times we’re just like, “It’s open source. It’s better by default.” I don’t like that. I want to talk a bit about this. I want to talk about why something being open source makes it actually better, for strategic reasons. I’m going to give four very quick reasons for that. The first one is probably the one that will have leadership most interested. There was an interesting paper that came out in January of this year, from Harvard Business School, that tried to estimate the value of open source. One of its key findings is that if all our companies stopped using open source and had to rewrite it from scratch internally, it would cost three-and-a-half times more. First off, using open source is good because it saves you money.
The second reason why open source is important is that it helps you retain control over the technology that you’re using. I think we underestimate the value of the fork button over there. As you can see here, I have my own fork because I’m a maintainer, but this is also the button that allowed us to create React Native macOS. The fact that you can, in some shape or form, always go back to the code and own it at the level that you need, if something goes wrong or if you want to move it in a different direction, is incredibly powerful. It also eliminates the situation where, if the owner decides to do something different with the tool, you are just at their mercy, or you’re stuck, or you need to move to an entirely different stack. The third reason is that React Native in particular has a big community. It’s part of the JavaScript ecosystem, which, as we’ve already mentioned, is the biggest pool of developers.
It’s not surprising to see that we have some massive numbers. For example, I was trying to find the best numbers to show how big this community is, and I think the fact that React Native now gets downloaded around 2 million times a week on npm is a pretty significant number. The React Native directory website is a hand-curated list of all the libraries out there that are made specifically for React Native. Imagine simply the fact that because you’re using React Native, you have access to almost 1400 libraries. On top of those, you can add all the libraries that are just vanilla JavaScript. They don’t need to be React Native specific; as long as it’s JavaScript, it’s probably going to work. Being part of this big community really means a lot of tooling, a lot of resources. It is not something that you should underestimate.
Then, of course, and this is more specific to React Native, because React Native is based on React, it allows for very interesting cross-company synergies. By that I mean that if my team, my org, invests in React Native, and our sister team, our sister org, that maybe works on web, invests in React, the fact that we both have this common denominator allows for broader investments and a bigger center of gravity around that technology in our company. For example, at Microsoft, we have our Fluent UI library. Basically, Fluent UI is our design system. The Fluent UI GitHub repo is the web implementation, and it uses React. The fact that different teams can, in a way, come together and say, yes, React is a technology we like and invest in, really helps the conversation with leadership, which, in a strategic sense, is very powerful.
Then, there’s technically one more that I wanted to add. It’s cheating, though, so I’m only mentioning it in quotes: potentially using web code. One of the things I mentioned earlier is that you need JavaScript React Native UI code and JavaScript modules to make a React Native app that controls the native platform. Especially that React Native-specific JavaScript UI code is sometimes a bit of a hiccup, something that not many teams want to deal with. Like, I already have my web React code, why doesn’t it just work? This is actually something that we at Microsoft, together with Meta, have been thinking about and talking about for a while. The idea is that, basically, we want to make it so React Native can use straight-up web React code, and also so that we can use Web APIs as the way to control native modules. Of course, this is still in the experimental phase. There is the react-strict-dom repository by Meta. There’s the WebAPIs work by us. Basically, we’re actively working on this. We really think that this is going to be the next big thing in the React Native space.
Tradeoffs
Of course, there are tradeoffs. You wouldn’t believe me if I just came here and finished the talk with the last line being, “Everything is perfect. Goodbye.” There are tradeoffs. We’re all engineers. We all know that if something is too good to be true, it’s probably fake. Let’s talk about a few big tradeoffs that come with React Native. The first one: it’s alive. It means that it’s still alive and invested in. It keeps moving. Because it keeps moving, it means that we need to keep up. For example, this is one of the biggest pain points, usually, when you talk with people about React Native: when you need to upgrade between different versions of React Native, you become very familiar with these massive diffs. Like, these are all the things you need to change to move from one version to the other. This is something we’ve been actively working on with Meta. I’ve been part of the release crew for many years. Also, we made some of the tooling. One of the tools I mentioned, React Native Test App, helps a lot with the upgrading process.
Another tradeoff is that, yes, I did mention that you can integrate it with existing projects, which is pretty cool and very powerful. A lot of the adaptability comes from there. That’s also the main way Meta is using it in the main Facebook app, in the Marketplace tab precisely, through this integration with an existing project in brownfield mode. When it comes to existing projects, though, you’re adding overhead. To dive a bit more into that, there are basically two main ways the overhead is added. One is having this communication layer. You are adding chunkiness to your app. You’re adding some time to the startup process.
Also, because you’re adding JavaScript, and we all know JavaScript bundles can be pretty massive, that’s another area that you need to take into consideration. You’re like, do I want to add it in? Can I take the tradeoff? To help with that, we already mentioned the custom layers. That’s one way, though it may require some internal tinkering and engineering. There’s also the new architecture, so that part of the problem is being addressed. For the JavaScript bundle, we actually have, in the monorepo I mentioned earlier, a tool which is literally a tree shaker, and that helps a lot. Things have tradeoffs, but fixes are in the works.
When it comes to integrating with an existing project, there is actually a bigger issue, especially when you’re in a very big scenario, or in a big company, trying to convince many people to integrate React Native into your project, because, at the end of the day, what you’re doing is disrupting an equilibrium. If you’re in the React Native space, if you have heard of React Native, probably the biggest example I can give you of this is the infamous Sunsetting React Native blog post series by Airbnb. If you go and read through them: basically, Airbnb was very invested in React Native, and at one point in 2018 they were like, no, it didn’t work out. Bye.
The problem is that for a lot of people, it literally became, React Native is dead. That was 2018. I’m still here in 2024, so I’m living proof that that wasn’t true. If you actually go and read the blog posts, I think that one of the main takeaways is literally this disruption of the equilibrium. You cannot just go to an iOS engineer, to a Kotlin engineer, shove JavaScript down their throat, and say, be happy with it. You really need to be careful in how you approach it, and in how much agreement, how much buy-in from leadership you have, when it comes to integrating something like React Native into preexisting solutions.
Recap
When it comes to adaptability, thanks to its architecture that can be modified and adapted in a lot of different ways, React Native is very powerful. It allows new platforms, new configurations, and you can mix and match. You can add as much or as little of it as you want. Because it’s JavaScript based, you can easily share code and tooling across the code base. Think of something like ESLint or Prettier: all this tooling that your JavaScript developers are already familiar with, they can still use, even if they target different platforms or the web. Talking about JavaScript, when we move to the strategy bit: because it’s JavaScript based, it means the biggest pool when hiring. The flexibility allows for tailored and incremental usage. You don’t need to take the all-or-nothing approach. You don’t need to get massive sign-off. Maybe you can just start with that page that is 10 screens deep, and try it just there. You can really take your time introducing it and making sure it works out for you. The project is still very much alive.
You don’t need to worry that it’s going to disappear tomorrow. It’s not going to be on the Killed by Google Twitter account. It allows for cross-industry and cross-company synergies, which, when you need to get buy-in in big companies, are the types of pros that are really important when you want to keep your team and everyone happy. Of course, it’s open source. Let’s go through the tradeoffs. You need to keep up. It adds some actual technical overhead, so you need to work through that. Some of it is already being addressed. It can lead to disrupting equilibriums. That’s probably where most projects will fail in introducing and using React Native. Be careful with that. React Native is incredibly adaptable. It has good strategic reasons. It has tradeoffs. Again, please take all the things I’ve said and adapt them to your situation, the next time you go into a decision process, the next time you’re at an architecture table. Don’t just blindly say, it’s used by Meta, it’s used by Microsoft, so it’s going to be good for us. Play it careful.
Questions and Answers
Participant 1: You said one of the advantages of React Native is that it’s alive. What’s the relationship to the parent React project? Because that seems to be somewhat on a soul-searching mission, which hasn’t had a release in, I think, nearly two years. What’s the relationship between React Native and the somewhat floundering React project?
Sciandra: I think that one thing to clarify is that, at Meta, in terms of organizational structure, React and React Native are part of the same org. Literally, the entire org is called the React Org. React Native, in that sense, is very much part of the project. As you may know, internally at Meta, there’s one massive monorepo code base where everything lives together. Of course, I’m from Microsoft; I can talk about some of the bits of how Meta works, but I cannot dive in too deep. React is very much alive. I’m pretty sure that React 19 is going to be out soon. There’s going to be React Conf, and I expect a lot of announcements to happen there. React is a much smaller project, in a way. It’s just a web library, in a certain sense.
React Native, by comparison, needs to keep up much more with what’s happening around it, with Android and iOS. In that sense, I wouldn’t take the different release rhythms as a significant signal. That’s just in the nature of the two projects. They are very much interconnected. The people that work on one are very close to the people that work on the other. There was a talk from a person on the Oculus experiences team, and that’s also another platform where React and React Native are used heavily in conjunction. In a way, as long as one of the two projects has a spark, you can assume that both have the spark, because it’s basically the same people working on both.
Participant 1: When I looked at the React Native support for desktop a year or two back, there was a fork repo of the React Native things, and if you wanted to build a mobile app that also works on desktop, you physically had to build two different applications, because there was the official React Native for mobile, and then there was the Microsoft fork for desktop, which is different from some of your competitors. Is it still the case that you have to physically build two different applications?
Sciandra: No. The repositories are still separate. There’s the facebook/react-native code base, which is iOS and Android, and there are the Microsoft React Native macOS and React Native Windows repos. Those are three separate code bases. I think maybe what you’re referring to is the developer experience when you build the app, or the actual code base. At this point in time, the flow is that you generate a React Native app, and then you run an extra command that adds the extra folder. You generate the React Native project, then you go into the folder and run the React Native Windows command in it, or something like that, and that adds the Windows implementation, all the native bits that you need, in a Windows folder. That’s pretty much it, as far as I’m aware. I don’t use Flutter, but my understanding is that there, too, you have one code base with separate platform folders living side by side.
Participant 2: What is the performance implication of using the native code, doing Android and Java, or whatever, Kotlin, versus doing React Native? You did not talk about performance at all.
Sciandra: If I have a fully Kotlin-based Android app, how does it do performance-wise compared to React Native? What we’ve seen at Microsoft in all our scenarios is that the performance is not significantly different. Even if there is some small overhead, in the end, when it comes to the final user experience, the interactivity difference is minimal. Of course, we don’t try to use React Native for very complicated, computationally heavy situations. We don’t try to render a 3D object in visionOS or something. The use cases are much more like the live persona card I was showing earlier. For those scenarios, it’s pretty much the same, in particular when it comes to the user. Some small benchmarks might differ, but for the end user, the milliseconds of responsiveness are in the same hundreds, so 200, 220, something like that.
Participant 3: For example, you were showing that some of the projects you have, 10% of the project is React Native, 90% is native. Is this because 10% of it can be made in React Native and the rest is too computational?
Sciandra: It’s just that it was a preexisting code base. If we already have Outlook built, and we just want to add, when you click on the avatar, a new thing that comes out and shows you some details, what happens is that we don’t throw away everything that was built before. It’s just there, and we say, let’s add this new part in. Instead of building it natively, let’s take it from that one code base where we’ve already built it for the other platforms, and just put it in.
Sciandra: It depends on the feature, on the team. Again, some of the things I mentioned are these non-technical reasons. At our size, most of the decisions are not very technical; they’re much more around which team has the sign-offs to do certain things. It almost always boils down to which leadership has agreed to do what in which team, and based on that, that gets reflected in what code ends up in the code base. There isn’t a master plan to get every single app everywhere to be built in React Native. At Microsoft, we don’t use it for all the apps that we make. We use it quite a bit, at different levels, but it’s very much a situation of: do we have this code already? If we use React Native, does it mean that we are faster to market with that feature?
For example, Copilot is a big example of that. One of the reasons why we were able to get Copilot so quickly across all our native apps is because we had the web implementation and through some of the things I mentioned earlier, basically that made the process much quicker, because we had the web code and we were like, go, go, go. Let’s just wire it up as quickly as we can, and give it to our users.
Participant 2: Since you are from Microsoft, how do you choose between .NET MAUI, which is the new multi-platform framework where, again, you can build apps for multiple platforms, versus React Native?
Sciandra: It’s almost always a question of which code base we’re looking at, which pool of developers we’re looking at. For example, one of the key differences between React Native and .NET MAUI is that .NET MAUI is much more for C# developers, that silo of developers and technologies. It very much depends on what code base the work needs to be done on, and what expertise the team has. For example, again, if we’ve built the live persona card in JavaScript for web, perfect. We need something JavaScript based if we want to go fast, so React Native makes more sense.
As a theoretical example, let’s say Word builds a special feature in C++, and we want to bring it over to Excel, PowerPoint, or even Outlook. In those scenarios, maybe something like .NET MAUI could make sense, or just straight C++ code sharing. It really depends. It’s very much feature first. We’re trying to get value to our customers as quickly as we can, in the best way possible. One of the things that we believe is that the experience in a native app needs to be compliant with that native platform. That’s why we try to use React Native instead of WebViews. Even so, we have a lot of things that are built in WebViews because of the decision-making process: who is involved, who’s signing off.
Participant 4: We use a lot of WebViews in our mobile app, and we use React web to populate those WebViews. It’s really interesting to see the quite extensive use of React Native. We’ve run some experiments with React Native before, we’ve actually taken out of the code base. I just wondered, was it worth us having another go? Where do you see the tradeoffs between React web in a WebView, versus trying to run some more experiments with React Native?
Sciandra: It depends. What are you trying to optimize for? One of the things I said at the very top is that, by using React Native, the native OS knows exactly what’s happening in the app. It knows every single component. If you have something like a WebView, all the native OS knows is: now I’m drawing a WebView, and I have no clue what’s happening in there. That’s one of the situations where this could lead the OS to optimize your app in a different way, because it has no concept of which screen you’re showing. Are you showing three screens, are you showing one? With a WebView, you don’t really know. It really depends on what you’re trying to optimize for. If your attitude is, I have my web app, I really don’t want to spend any more engineering time, and I just want to give my users something to install on their phone, a WebView can make sense, of course.
Participant 4: Potentially there’s a strategy to do both depending on the use case.
Sciandra: There’s also the React Native WebView. Probably you’re using a straight WebView. There are different options. Again, WebView makes sense in certain scenarios. It all depends on what you’re trying to optimize for.
Participant 5: Why React Native and why not something like NativeScript, which would maybe not have to tie you as much to React specifically?
Sciandra: First off, because, as I was saying earlier, if we all work in the React space, we can create cross-company synergies. There are some strategic reasons there. The NativeScript case is actually pretty peculiar. Some of the people that work on the project interact heavily with our team, because some of the solutions that we developed around, for example, the JavaScript engine that React Native uses are now being used by NativeScript to do some really interesting things. I work constantly with Jamie Birch and Nathan Walker and the team; we talk frequently. It’s pretty interesting to see how these two projects that some people may perceive as competitors are actually helping each other out a lot.
For our needs, I think that what happened is that when we started looking into React Native, NativeScript wasn’t as powerful or as polished as it is today. I think now it’s a much stronger competitor than some other alternatives, especially when you want to go for a JavaScript-based solution. For our needs, our investments, and the platforms that we target, React Native still makes more sense. There’s also some inertia at play: we’ve invested so much that we’re not going to pivot everything all at once. The two projects are collaborating very closely.
Participant 6: Are there any scenarios where you want to go the other way round? Say you have that brownfield application, and maybe you want to make the 5% native instead of supporting and using React Native.
Sciandra: If I have something done in React Native and I want to rebuild it in native? I have not heard of any scenario where we’ve done that so far, but I can see situations where, for example, there is a reshuffling of the organization, and the new leadership has cut the web team because they believe more in the native apps than in the web team or the web app. In those situations, I can totally see a native team having to take over a preexisting brownfield scenario. In that case, if your native team is Swift only, it makes sense to transition off React Native in favor of going back full native, which is basically what Airbnb did. As soon as the people that were advocating the most for the brownfield approach left and the expertise left, what was still at the company were the native engineers, so, of course, you want to rebuild so that you make the best use of the engineering that you have, basically.
MMS • Ben Linders
Article originally posted on InfoQ. Visit InfoQ
By focusing on the users, platform development teams can ensure that they build a platform that tackles the true needs of developers, Ana Petkovska said at QCon London. In her talk, Delight Your Developers with User-Centric Platforms & Practices, she shared what their Developer Experience (DevEx) group looks like and what products and services they provide.
Platform development teams should be user-centric across all stages of platform development to understand the true needs of the developers before building the platform, Petkovska said. They should keep close contact with their developers to foster platform adoption and usage, as well as to offer support when issues arise.
The goal is to design a platform that is easy to use and easy to onboard onto. To achieve this, Petkovska suggested prioritising future platform development based on user feedback.
As shown in the Accelerate State of DevOps 2023 report, besides improving the user experience, the strong user focus also brings benefits for the platform development teams: it boosts the performance of the teams and leads to higher job satisfaction, Petkovska mentioned.
Petkovska mentioned that they initially started with one team focused on improving the developer experience and productivity through improving the CI/CD tools, as well as the deployment and release processes for the product development teams. Later, they formed their DevEx group by adding another team that was focused on infrastructure development and release. Finally, they formed a team that owns the data infrastructure, and shares tools and best practices with the development teams for using it, as Petkovska explained:
All teams provide internal development platforms, tools, and services to ease the product creation of the development teams, while also improving their productivity and experience.
By providing self-service platforms for different needs (for access rights management, repository management, build tools configuration, etc.), we are enabling the developers to autonomously obtain what they need and when they need it, Petkovska said. This also improves the productivity of the organisation as a whole because the developers can focus more easily on delivering value. In general, platform development teams focused on DevEx have a multiplicative productivity effect:
With the time that they spend, they improve the productivity of all the product developers that they serve.
InfoQ interviewed Ana Petkovska about developing and providing user-centric platforms.
InfoQ: What’s your strategy for treating your platform as a product?
Ana Petkovska: When we think about our platform as a software product that we “sell” internally to our engineering organisation, we can correlate easily to a normal software company and apply similar practices.
- We recognise the product developers as users of our platform. This allows us to tailor our platform around the real needs of our developers.
- We treat our platform as a product, and we share the product manager (PM) role among the managers of the technical platform group and the technical leads of the team.
- We maintain and publish a roadmap with the next projects for each team. Thus, anyone in the Engineering organisation can understand what we have achieved in the past and what we are planning to do next.
- We have technical previews for our platforms with pilot teams as early adopters, before we open the usage to all product teams.
- We prioritise the development of the platform depending on the feedback that we get from the users. For example, during the on-boarding and adoption, some teams might highlight missing features that impede the developers from adopting the platform or decrease their user experience.
- We have well-defined communication channels with our users in order to promote the usage of the platform, ease adoption, and provide support.
InfoQ: How do you communicate with the users of your platform?
Petkovska: We have several ways of engaging with the developers, depending on the type and purpose of the engagement.
- We have weekly meetings called DevEx Connect, where we inform the teams about important changes that our teams are bringing. They are particularly useful when we need adoption of the platforms that our team has built: to inform the developers of the progress and significant changes, to answer questions, and to get feedback.
- For any issue during the daily work, we have a dedicated Jira board where the developers can open support tickets for the tools and services that the platform teams own.
- For everything that our platform development teams own, we write documentation, and provide guides and examples on how the platforms can be used.
- For bigger changes, like offering and migrating to a new CI/CD tool, we also organise workshops and trainings for the product development teams.
- We have dedicated communication channels for the adoption of big changes. These are particularly useful because they create a community, and the product developers can help their peers by sharing their experience from using our platforms.
- For simple queries, they can also contact us via a public communication channel.
InfoQ: What’s your advice to platform teams for enhancing the developer experience in their company?
Petkovska: Focus on user-centric platforms and practices to improve the developer productivity and experience of your product and platform teams.
- Start by ensuring that you have well-established platform teams that are set to bring the DevEx improvements.
- Be intentional when building self-service platforms and treat them as a product. Ensure that you understand the users’ needs when building them.
- Put a strong focus on communication with your users, as well as educating them on how to adopt the platform.
B. Metzler seel. Sohn & Co. Holding AG Makes New $4.37 Million Investment in MongoDB …
MMS • RSS
Posted on mongodb google news. Visit mongodb google news
B. Metzler seel. Sohn & Co. Holding AG bought a new stake in MongoDB, Inc. (NASDAQ:MDB – Free Report) during the third quarter, according to the company in its most recent Form 13F filing with the Securities & Exchange Commission. The fund bought 16,150 shares of the company’s stock, valued at approximately $4,366,000.
A number of other large investors also recently added to or reduced their stakes in MDB. Swedbank AB grew its stake in shares of MongoDB by 156.3% in the second quarter. Swedbank AB now owns 656,993 shares of the company’s stock valued at $164,222,000 after buying an additional 400,705 shares in the last quarter. Thrivent Financial for Lutherans grew its position in MongoDB by 1,098.1% in the 2nd quarter. Thrivent Financial for Lutherans now owns 424,402 shares of the company’s stock valued at $106,084,000 after acquiring an additional 388,979 shares in the last quarter. Blair William & Co. IL grew its position in MongoDB by 16.4% in the 2nd quarter. Blair William & Co. IL now owns 315,830 shares of the company’s stock valued at $78,945,000 after acquiring an additional 44,608 shares in the last quarter. Fiera Capital Corp increased its stake in MongoDB by 1.5% during the second quarter. Fiera Capital Corp now owns 231,915 shares of the company’s stock worth $57,969,000 after purchasing an additional 3,525 shares during the period. Finally, Swiss National Bank lifted its position in shares of MongoDB by 1.1% during the third quarter. Swiss National Bank now owns 217,700 shares of the company’s stock worth $58,855,000 after purchasing an additional 2,300 shares in the last quarter. 89.29% of the stock is currently owned by institutional investors and hedge funds.
Insider Activity at MongoDB
In other MongoDB news, CFO Michael Lawrence Gordon sold 5,000 shares of the company’s stock in a transaction that occurred on Monday, October 14th. The stock was sold at an average price of $290.31, for a total value of $1,451,550.00. Following the completion of the transaction, the chief financial officer now owns 80,307 shares in the company, valued at $23,313,925.17. This represents a 5.86 % decrease in their position. The transaction was disclosed in a legal filing with the Securities & Exchange Commission, which is available through the SEC website. Also, Director Dwight A. Merriman sold 1,319 shares of the business’s stock in a transaction on Friday, November 15th. The stock was sold at an average price of $285.92, for a total transaction of $377,128.48. Following the transaction, the director now directly owns 87,744 shares in the company, valued at approximately $25,087,764.48. The trade was a 1.48 % decrease in their ownership of the stock. The disclosure for this sale can be found here. In the last 90 days, insiders have sold 25,600 shares of company stock valued at $7,034,249. 3.60% of the stock is owned by corporate insiders.
MongoDB Stock Performance
MDB stock opened at $281.76 on Thursday. MongoDB, Inc. has a 52 week low of $212.74 and a 52 week high of $509.62. The company has a debt-to-equity ratio of 0.84, a current ratio of 5.03 and a quick ratio of 5.03. The company’s 50-day moving average price is $278.01 and its 200 day moving average price is $272.52.
MongoDB (NASDAQ:MDB – Get Free Report) last issued its quarterly earnings data on Thursday, August 29th. The company reported $0.70 EPS for the quarter, beating analysts’ consensus estimates of $0.49 by $0.21. The firm had revenue of $478.11 million for the quarter, compared to analysts’ expectations of $465.03 million. MongoDB had a negative net margin of 12.08% and a negative return on equity of 15.06%. The firm’s revenue was up 12.8% on a year-over-year basis. During the same period in the prior year, the company earned ($0.63) EPS. On average, equities analysts expect that MongoDB, Inc. will post -2.39 EPS for the current fiscal year.
Wall Street Analysts Weigh In
A number of equities research analysts have issued reports on the stock. Truist Financial increased their target price on shares of MongoDB from $300.00 to $320.00 and gave the stock a “buy” rating in a report on Friday, August 30th. Needham & Company LLC raised their price target on MongoDB from $290.00 to $335.00 and gave the company a “buy” rating in a research note on Friday, August 30th. Mizuho lifted their price target on MongoDB from $250.00 to $275.00 and gave the stock a “neutral” rating in a report on Friday, August 30th. UBS Group increased their price objective on MongoDB from $250.00 to $275.00 and gave the company a “neutral” rating in a report on Friday, August 30th. Finally, Bank of America boosted their target price on MongoDB from $300.00 to $350.00 and gave the stock a “buy” rating in a research note on Friday, August 30th. One research analyst has rated the stock with a sell rating, five have issued a hold rating, nineteen have given a buy rating and one has issued a strong buy rating to the stock. According to MarketBeat.com, MongoDB has a consensus rating of “Moderate Buy” and an average target price of $336.54.
MongoDB Company Profile
MongoDB, Inc., together with its subsidiaries, provides a general-purpose database platform worldwide. The company provides MongoDB Atlas, a hosted multi-cloud database-as-a-service solution; MongoDB Enterprise Advanced, a commercial database server for enterprise customers to run in the cloud, on-premises, or in a hybrid environment; and Community Server, a free-to-download version of its database, which includes the functionality that developers need to get started with MongoDB.
Article originally posted on mongodb google news. Visit mongodb google news
MMS • RSS
Posted on mongodb google news. Visit mongodb google news
GSA Capital Partners LLP boosted its stake in MongoDB, Inc. (NASDAQ:MDB – Free Report) by 38.0% during the 3rd quarter, according to its most recent Form 13F filing with the Securities and Exchange Commission (SEC). The institutional investor owned 1,598 shares of the company’s stock after acquiring an additional 440 shares during the period. GSA Capital Partners LLP’s holdings in MongoDB were worth $432,000 at the end of the most recent reporting period.
A number of other institutional investors and hedge funds also recently modified their holdings of the business. MFA Wealth Advisors LLC purchased a new position in MongoDB during the 2nd quarter valued at about $25,000. J.Safra Asset Management Corp grew its position in MongoDB by 682.4% in the second quarter. J.Safra Asset Management Corp now owns 133 shares of the company’s stock valued at $33,000 after acquiring an additional 116 shares during the period. Quarry LP grew its position in MongoDB by 2,580.0% in the second quarter. Quarry LP now owns 134 shares of the company’s stock valued at $33,000 after acquiring an additional 129 shares during the period. Hantz Financial Services Inc. bought a new stake in shares of MongoDB during the second quarter worth $35,000. Finally, GAMMA Investing LLC boosted its holdings in MongoDB by 178.8% in the third quarter. GAMMA Investing LLC now owns 145 shares of the company’s stock valued at $39,000 after acquiring an additional 93 shares during the last quarter. 89.29% of the stock is owned by institutional investors and hedge funds.
MongoDB Stock Performance
Shares of MongoDB stock opened at $281.76 on Thursday. The firm has a market cap of $20.81 billion, a P/E ratio of -95.74 and a beta of 1.15. The company has a current ratio of 5.03, a quick ratio of 5.03 and a debt-to-equity ratio of 0.84. MongoDB, Inc. has a 12 month low of $212.74 and a 12 month high of $509.62. The stock’s fifty day moving average price is $278.01 and its 200-day moving average price is $272.52.
MongoDB (NASDAQ:MDB – Get Free Report) last posted its earnings results on Thursday, August 29th. The company reported $0.70 earnings per share (EPS) for the quarter, beating the consensus estimate of $0.49 by $0.21. MongoDB had a negative return on equity of 15.06% and a negative net margin of 12.08%. The company had revenue of $478.11 million for the quarter, compared to analyst estimates of $465.03 million. During the same quarter in the previous year, the firm earned ($0.63) earnings per share. The firm’s quarterly revenue was up 12.8% compared to the same quarter last year. On average, equities research analysts expect that MongoDB, Inc. will post -2.39 earnings per share for the current fiscal year.
Insider Buying and Selling
In other news, CFO Michael Lawrence Gordon sold 5,000 shares of the business’s stock in a transaction that occurred on Monday, October 14th. The stock was sold at an average price of $290.31, for a total value of $1,451,550.00. Following the completion of the sale, the chief financial officer now directly owns 80,307 shares in the company, valued at $23,313,925.17. The trade was a 5.86 % decrease in their ownership of the stock. The transaction was disclosed in a filing with the Securities & Exchange Commission, which is available through this link. Also, Director Dwight A. Merriman sold 1,319 shares of the business’s stock in a transaction that occurred on Friday, November 15th. The shares were sold at an average price of $285.92, for a total value of $377,128.48. Following the completion of the sale, the director now owns 87,744 shares of the company’s stock, valued at $25,087,764.48. This trade represents a 1.48 % decrease in their ownership of the stock. The disclosure for this sale can be found here. Insiders sold 25,600 shares of company stock valued at $7,034,249 over the last quarter. 3.60% of the stock is currently owned by company insiders.
Analyst Upgrades and Downgrades
Several equities analysts have weighed in on MDB shares. Bank of America increased their target price on MongoDB from $300.00 to $350.00 and gave the company a “buy” rating in a research note on Friday, August 30th. Truist Financial lifted their target price on MongoDB from $300.00 to $320.00 and gave the stock a “buy” rating in a research note on Friday, August 30th. Oppenheimer lifted their price target on MongoDB from $300.00 to $350.00 and gave the stock an “outperform” rating in a report on Friday, August 30th. JMP Securities reaffirmed a “market outperform” rating and issued a $380.00 price objective on shares of MongoDB in a report on Friday, August 30th. Finally, Piper Sandler lifted their price objective on MongoDB from $300.00 to $335.00 and gave the company an “overweight” rating in a report on Friday, August 30th. One research analyst has rated the stock with a sell rating, five have assigned a hold rating, nineteen have assigned a buy rating and one has assigned a strong buy rating to the stock. Based on data from MarketBeat, MongoDB currently has an average rating of “Moderate Buy” and an average target price of $336.54.
MongoDB Company Profile
MongoDB, Inc., together with its subsidiaries, provides a general-purpose database platform worldwide. The company provides MongoDB Atlas, a hosted multi-cloud database-as-a-service solution; MongoDB Enterprise Advanced, a commercial database server for enterprise customers to run in the cloud, on-premises, or in a hybrid environment; and Community Server, a free-to-download version of its database, which includes the functionality that developers need to get started with MongoDB.
Article originally posted on mongodb google news. Visit mongodb google news
MMS • RSS
Posted on mongodb google news. Visit mongodb google news
MongoDB announced an expanded collaboration with Microsoft, introducing three new capabilities for joint customers. These innovations aim to enhance the development of AI-driven applications, improve real-time analytics, and provide greater flexibility in deploying MongoDB across various environments.
Enhancing LLMs with Proprietary Data: Through the integration of MongoDB Atlas with Microsoft Azure AI Foundry, businesses can now develop retrieval-augmented generation (RAG) applications by combining enterprise data stored in MongoDB Atlas with the advanced capabilities of Azure OpenAI Service. This integration enables the creation of custom chatbots, copilots, and internal applications, allowing developers to use MongoDB Atlas as a vector data store without additional coding or pipeline setup. Azure AI Foundry’s “Chat Playground” further facilitates testing and refining these applications before production.
Real-Time Business Insights: The new Open Mirroring feature in Microsoft Fabric for MongoDB Atlas enables near real-time data synchronization between MongoDB Atlas and OneLake in Microsoft Fabric. This connection allows businesses to generate analytics, AI predictions, and business intelligence reports without managing complex data replication processes, leveraging the strengths of both platforms seamlessly.
Flexible Deployment Options: The launch of MongoDB Enterprise Advanced (EA) on Azure Marketplace for Azure Arc-enabled Kubernetes applications provides organizations with greater deployment flexibility. Customers can now deploy and self-manage MongoDB instances across on-premises, hybrid, and multi-cloud environments. The integration of MongoDB EA with Azure Arc enhances the management, scalability, and resilience of critical workloads across distributed Kubernetes clusters.
Alan Chhabra, Executive Vice President of Partners, MongoDB
We frequently hear from MongoDB’s customers and partners that they’re looking for the best way to build AI applications, using the latest models and tools. Now, with the MongoDB Atlas integration with Azure AI Foundry and Open Mirroring in Microsoft Fabric, customers can seamlessly sync data and power AI applications with their own data. Combining the best from Microsoft with the best from MongoDB will help developers push applications even further.
Dan Farner, Vice President of Product Development, Trimble
As an early tester of the new integrations, Trimble views MongoDB Atlas as a premier choice for our data and vector storage. Building RAG architectures for our customers requires powerful tools that enable the storage and querying of large data collections and AI models in near real-time. We’re excited to build on MongoDB and leverage its integrations with Microsoft to accelerate our ML offerings in the construction space.
Kolby Kappes, Vice President – Emerging Technology, Eliassen Group
We’ve witnessed the incredible impact MongoDB Atlas has had on our customers’ businesses and have been equally impressed by Microsoft Azure AI Foundry’s capabilities. Now that these platforms are integrated, we’re excited to combine their strengths to build AI solutions our customers will love.
Sandy Gupta, Vice President, Partner Development ISV, Microsoft
By integrating MongoDB Atlas with Microsoft Azure’s powerful AI and data analytics tools, we empower our customers to build modern AI applications with unparalleled flexibility and efficiency. This collaboration ensures seamless data synchronization, real-time analytics, and robust application development across multi-cloud and hybrid environments.
Article originally posted on mongodb google news. Visit mongodb google news
ASP.NET Core 9: Enhancements in Static Asset Handling, Blazor, SignalR, and OpenAPI Support
MMS • Robert Krzaczynski
Article originally posted on InfoQ. Visit InfoQ
Microsoft has released .NET 9, which includes a new set of features for ASP.NET Core. These features improve performance, streamline development, and extend the framework’s capabilities. This latest release focuses on optimizing static asset handling, refining Blazor’s component interaction, enhancing SignalR’s observability and performance, and streamlining API documentation through built-in OpenAPI support.
Microsoft has improved Blazor with several updates to enhance component interaction and rendering capabilities. One of them is a new runtime API that enables developers to query component states. This functionality allows them to determine the execution status of a component and whether it is running interactively. As a result, this aids in optimizing performance and troubleshooting issues more effectively. Additionally, the update introduces the [ExcludeFromInteractiveRouting] attribute, which facilitates static server-side rendering (SSR) for specific pages. This feature is particularly beneficial for pages that require traditional HTTP cycles instead of interactive rendering.
The next addition is MapStaticAssets, designed to optimize the delivery of static resources. This feature automates tasks like compression, caching, and versioning, which previously required manual configuration. It integrates build and publish-time metadata to better serve static assets, supporting frameworks such as Blazor, Razor Pages, and MVC. While MapStaticAssets works well for assets managed by the app, UseStaticFiles is still available for handling external or dynamic resources. Developers can implement MapStaticAssets with ease:
var builder = WebApplication.CreateBuilder(args);
builder.Services.AddRazorPages();
var app = builder.Build();
app.UseHttpsRedirection();
app.MapStaticAssets();
app.MapRazorPages();
app.Run();
SignalR now supports polymorphic hub method arguments, enabling methods to accept base classes for dynamically handling derived types. In addition, activity tracking in SignalR has been improved with detailed event generation for each hub method. Furthermore, support for trimming and Native AOT (Ahead-of-Time) compilation has been introduced, which decreases application size and enhances performance for both clients and servers.
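To make the polymorphic hub arguments concrete, here is a minimal sketch of a hub method that accepts a base class. The type, method, and event names below are hypothetical; the dispatch relies on the System.Text.Json polymorphism attributes declared on the base type:

```csharp
using System.Text.Json.Serialization;
using Microsoft.AspNetCore.SignalR;

// Hypothetical message types: the base class declares its derived types
// via System.Text.Json polymorphism attributes.
[JsonPolymorphic]
[JsonDerivedType(typeof(EmailNotification), nameof(EmailNotification))]
[JsonDerivedType(typeof(SmsNotification), nameof(SmsNotification))]
public class Notification
{
    public string Recipient { get; set; } = "";
}

public class EmailNotification : Notification
{
    public string Subject { get; set; } = "";
}

public class SmsNotification : Notification
{
    public string PhoneNumber { get; set; } = "";
}

public class NotificationHub : Hub
{
    // The hub method accepts the base type; the runtime deserializes the
    // payload to the derived type indicated by the type discriminator,
    // so pattern matching on the concrete type works as expected.
    public async Task Send(Notification notification)
    {
        switch (notification)
        {
            case EmailNotification email:
                await Clients.All.SendAsync("email", email.Recipient, email.Subject);
                break;
            case SmsNotification sms:
                await Clients.All.SendAsync("sms", sms.Recipient, sms.PhoneNumber);
                break;
        }
    }
}
```

Clients include the discriminator alongside the payload, which lets a single hub method handle the whole type hierarchy instead of requiring one method per derived type.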
Moreover, Minimal APIs are enhanced by new tools designed for error handling. Developers can now utilize TypedResults to return strongly typed responses, including the InternalServerError status for HTTP 500 errors:
var app = WebApplication.Create();
app.MapGet("/", () => TypedResults.InternalServerError("An error occurred."));
app.Run();
Another feature in this release is built-in OpenAPI document generation, made possible through the Microsoft.AspNetCore.OpenApi package. Developers can now generate OpenAPI documents for controller-based and minimal APIs with basic configuration. This provides a simpler alternative to SwaggerGen, though it does not include a UI component like SwaggerUI, which was removed from .NET 9 templates. Developers can still manually add SwaggerUI if required.
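As a rough sketch, the minimal setup looks like the following; the document route shown in the comment reflects the .NET 9 default, and the sample endpoint is illustrative:

```csharp
var builder = WebApplication.CreateBuilder(args);

// Registers the built-in OpenAPI document generator from the
// Microsoft.AspNetCore.OpenApi package.
builder.Services.AddOpenApi();

var app = builder.Build();

// Serves the generated document, by default at /openapi/v1.json.
app.MapOpenApi();

// An illustrative minimal API endpoint that will appear in the document.
app.MapGet("/hello", (string name) => $"Hello, {name}!");

app.Run();
```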
On Reddit, user GaussZ clarified the implications of this change:
The OpenAPI support is not replacing SwaggerUI; it explicitly comes without any UI part. It only replaces the SwaggerGen part. You can still use the SwaggerUI though.
Updates have been made to authentication and authorization, now incorporating support for Pushed Authorization Requests (PAR) within OpenID Connect workflows. Developers can customize parameters in OAuth and OpenID Connect handlers, allowing for better control over authentication processes.
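As a sketch of what that configuration can look like on the OpenID Connect handler in .NET 9 (treat the option and event names as a sketch of the new API surface rather than a definitive reference):

```csharp
builder.Services.AddAuthentication(OpenIdConnectDefaults.AuthenticationScheme)
    .AddOpenIdConnect(options =>
    {
        // The default behavior uses PAR when the identity provider's
        // metadata advertises support. Require fails the flow if the
        // provider does not support PAR; Disable opts out entirely.
        options.PushedAuthorizationBehavior = PushedAuthorizationBehavior.Require;

        // Parameters can be customized before the request is pushed
        // to the authorization server.
        options.Events.OnPushAuthorization = context =>
        {
            context.ProtocolMessage.SetParameter("prompt", "login");
            return Task.CompletedTask;
        };
    });
```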
Further details on the new features can be found in the release notes.
MMS • RSS
Posted on mongodb google news. Visit mongodb google news
MMS • RSS
Posted on nosqlgooglealerts. Visit nosqlgooglealerts
➤ Nosql Software Market Overview:
The NoSQL software market is expected to grow from USD 11.43 billion in 2024 to USD 40.3 billion by 2032, a CAGR of around 17.06% over the forecast period (2024-2032). The market is witnessing exponential growth as businesses seek scalable and flexible database solutions to manage unstructured data. NoSQL databases, unlike traditional relational databases, offer enhanced performance, scalability, and adaptability, making them ideal for cloud applications, big data analytics, and IoT. Their ability to handle diverse data formats has been a major driver for industries such as e-commerce, healthcare, and finance.
Adoption of NoSQL solutions is driven by the increasing demand for real-time data processing and analytics. Enterprises are prioritizing solutions that can accommodate vast volumes of data without compromising speed. The market is also fueled by the proliferation of social media, video streaming, and e-commerce platforms, which generate large datasets. These factors collectively underscore the critical role of NoSQL software in contemporary digital transformation efforts.
➤ Market Segmentation:
The NoSQL software market is broadly segmented by database type, application, and industry vertical. By database type, it includes document-based, key-value, column-family, and graph databases, each serving unique use cases. Applications span customer relationship management, content management, and web applications. The software is tailored to meet the diverse needs of small and medium enterprises (SMEs) as well as large corporations.
In terms of industry verticals, the technology sector leads adoption, followed by retail, healthcare, and financial services. The flexibility of NoSQL databases to support multi-model architectures has enhanced their appeal across industries. Furthermore, the rise of edge computing and AI-driven applications is expanding the scope of NoSQL solutions in niche domains like logistics and smart cities.
➤ Market Key Players:
Key players in the NoSQL software market include:
• Hazelcast
• Couchbase
• IBM
• Amazon Web Services (AWS)
• Redis Labs
• Microsoft
• ScyllaDB
• Neo4j
• MarkLogic
• Oracle
MongoDB remains a dominant force with its Atlas platform, providing a cloud-native NoSQL database solution. AWS DynamoDB is another major player, known for its seamless integration with AWS services and highly scalable infrastructure.
Redis Labs and Couchbase are expanding their offerings with advanced capabilities like in-memory processing and distributed architectures. These companies invest heavily in R&D to stay competitive and cater to evolving customer demands. Strategic partnerships and acquisitions have also been pivotal in shaping the competitive landscape of the NoSQL software market.
➤ Recent Developments:
The NoSQL market has witnessed notable developments, with companies launching innovative solutions to address specific industry challenges. MongoDB recently introduced features for generative AI applications, enabling businesses to build smarter applications. AWS DynamoDB continues to enhance its serverless capabilities, ensuring developers can manage large workloads efficiently.
Partnerships between NoSQL vendors and cloud service providers are increasing, driving integrated solutions that streamline data management. Additionally, the open-source community plays a significant role in the market, with frameworks like Apache Cassandra and Neo4j gaining traction. These advancements reflect the dynamic nature of the NoSQL market, adapting to meet modern demands.
➤ Market Dynamics:
The NoSQL software market is shaped by key drivers such as the growing need for big data analytics, the shift towards cloud computing, and rising adoption of microservices architecture. Enterprises increasingly prefer NoSQL databases for their ability to support distributed systems and offer high availability. Furthermore, the rise of IoT and AI applications is propelling demand for NoSQL solutions capable of handling massive datasets.
However, challenges such as a lack of skilled professionals and data security concerns persist. Vendors are addressing these issues by offering user-friendly platforms and robust encryption features. As enterprises continue to modernize their IT infrastructure, the demand for NoSQL databases is expected to grow, bolstering market expansion.
➤ Regional Analysis:
North America dominates the NoSQL software market, owing to its advanced technological infrastructure and high adoption of digital transformation strategies. The region’s strong presence of key players and emphasis on cloud-based solutions further solidify its leadership position. The United States, in particular, drives growth with significant investments in big data and AI technologies.
In the Asia-Pacific region, rapid urbanization, digitalization, and the proliferation of startups contribute to increasing demand for NoSQL solutions. Countries like China and India are emerging as key markets due to their growing IT sectors. Europe and the Middle East also present opportunities as organizations in these regions transition to modern data management systems.
This release was published on openPR.
MMS • RSS
Posted on mongodb google news. Visit mongodb google news
Microsoft and MongoDB have enhanced their collaboration to provide improved AI applications through a significant expansion of their partnership.
The extended partnership introduces three new key capabilities aimed at enabling joint customers to enhance AI application development. The capabilities focus on enhancing large language models (LLMs) with proprietary data, generating real-time business insights, and offering tailored deployment solutions using MongoDB.
One of the enhancements allows customers to use their own proprietary data stored in MongoDB Atlas to improve AI model performance and accuracy. This aims to facilitate the creation of more intelligent and customised AI applications by leveraging data unique to each business.
Additionally, a new synchronisation feature between MongoDB Atlas and Microsoft Fabric enables the extraction of near real-time business insights. This new capability promises quicker analysis and decision-making through real-time analytics, AI-based predictions, and business intelligence reports.
On the deployment front, MongoDB Enterprise Advanced (EA) can now be deployed across various environments, including on-premises, hybrid, and multi-cloud. This is made possible by its certification as an Azure Arc-enabled Kubernetes application, which provides customers with increased flexibility and control over data infrastructure.
In support of this partnership enhancement, Alan Chhabra, Executive Vice President of Partners at MongoDB, stated, “We frequently hear from MongoDB’s customers and partners that they’re looking for the best way to build AI applications, using the latest models and tools. And to address varying business needs, they also want to be able to use multiple tools for data analytics and business insights. Now, with the MongoDB Atlas integration with Azure AI Foundry, customers can power gen AI applications with their own data stored in MongoDB. And with Open Mirroring in Microsoft Fabric, customers can seamlessly sync data between MongoDB Atlas and OneLake for efficient data analysis. Combining the best from Microsoft with the best from MongoDB will help developers push applications even further.”
Trimble, a prominent provider of construction technology, is among the early testers of these integrations. Dan Farner, Vice President of Product Development at Trimble, commented, “As an early tester of the new integrations, Trimble views MongoDB Atlas as a premier choice for our data and vector storage. Building RAG architectures for our customers require powerful tools and these workflows need to enable the storage and querying of large collections of data and AI models in near real-time. We’re excited to continue to build on MongoDB and look forward to taking advantage of its integrations with Microsoft to accelerate our ML offerings across the construction space.”
Eliassen Group, a strategic consulting firm, also expressed positive expectations regarding the expanded collaboration. Kolby Kappes, Vice President – Emerging Technology at Eliassen Group, said, “We’ve witnessed the incredible impact MongoDB Atlas has had on our customers’ businesses, and we’ve been equally impressed by Microsoft Azure AI Foundry’s capabilities. Now that these powerful platforms are integrated, we’re excited to combine the best of both worlds to build AI solutions that our customers will love just as much as we do.”
Available in 48 Azure regions worldwide, MongoDB Atlas offers joint customers access to powerful document data model capabilities. This integration aims to accelerate and simplify the way developers build applications using structured and unstructured data.
Sandy Gupta, Vice President of Partner Development ISV at Microsoft, remarked, “By integrating MongoDB Atlas with Microsoft Azure’s powerful AI and data analytics tools, we empower our customers to build modern AI applications with unparalleled flexibility and efficiency. This collaboration ensures seamless data synchronization, real-time analytics, and robust application development across multi-cloud and hybrid environments.”
Article originally posted on mongodb google news. Visit mongodb google news
Presentation: From Local to Production: A Modern Developer’s Journey Towards Kubernetes
MMS • Urvashi Mohnani
Article originally posted on InfoQ. Visit InfoQ
Transcript
Mohnani: My name is Urvashi Mohnani. I’m a Principal Software Engineer on the OpenShift container tools team at Red Hat. I have been in the container space for about 7 years now. I’m here to talk to you about a developer’s journey towards Kubernetes. Let’s do a quick refresher on what containers are. Containers are software packages that bundle up code and all of its dependencies together so that the application can run in any computing environment. They’re lightweight and portable, making them easy to scale and share across various environments. When run, containers are just normal Linux processes with an additional layer of isolation and security, as well as resource management from the kernel.
Security comes in the form of configuring which and how many permissions your container has access to. Resources such as CPU and RAM can be constrained using cgroups. The isolation environment can be set up by tweaking which namespaces the process is added to. The different categories of namespaces you have are user namespaces, network namespaces, PID namespaces, and so forth. It really just depends on how isolated you want your container environment to be. How do we create a container? The first thing we need is a containerfile, or a Dockerfile. You can think of this as the recipe of what exactly goes inside your container.
In this file, you will define the dependencies and any content that your application needs to run. We can then build this containerfile to create a container image. The container image is a snapshot of everything that was in the recipe. Each line in the recipe is added as a new layer on top of the previous layers. At the end of the day, we compress all these layers together to create a tarball. When we run this container image, that’s when we get a container. Since containers are just Linux processes, they have always existed. You just had to be a Linux guru to be able to set up all the security, isolation, and cgroups around them. Docker was the first container tool to make this more accessible to the end user by creating a command line interface that does all the nitty-gritty setup for you, and all you have to do is give it a simple command to create your container.
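As a rough sketch of the recipe idea, a minimal containerfile for a Python web app might look like this (the file names, base image, and port are assumptions for illustration, not from the talk):

```dockerfile
# Containerfile — each instruction becomes a layer in the image
FROM python:3.12-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY app.py .
EXPOSE 8088
CMD ["python", "app.py"]
```

You would build and run it with something like `podman build -t python-frontend .` followed by `podman run -d -p 8088:8088 python-frontend`.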
Since then, many more container tools have been created in the open-source world, and they target different areas of the container space. We have a few listed on the slide here. We have Buildah, which focuses on building your container images; Skopeo, which focuses on managing your container images; and Podman, a tool not only for running your containers, but also for developing and creating pods. There is CRI-O, which is a lightweight daemon that is optimized for running your workloads with Kubernetes. Kubernetes itself is a container orchestration platform that allows you to manage your thousands of containers in production. Together, all these various container tools give you a holistic solution, depending on what area you really need to focus on in the container space. For this talk, I’m going to use Podman, which is an open-source project, to highlight how we can make a developer’s journey from local to prod seamless. A few things I would like to mention are that Podman is open source and completely free to use. It is daemonless, focuses on security first, and is compatible with all OCI-compliant container images and registries.
Towards Kubernetes
You’ve been running your containers locally; how do you get to production? There are a few key challenges in getting there. Some of them are paranoid sysadmins, different technologies and environments, and a different skill set as well. We call this the wall of discrepancies. Security doesn’t match up: you have low or no security in your local dev environment, while production has highly tightened security. Container processes have different permissions available to them. Locally, you have root privileges available, while in production rootless is required. In fact, even the way you define your container is different between the two environments. All of this just adds a lot of overhead for the developer and can definitely be avoided. Let’s take a look at how we can target some of these. When you run a container locally with a tool like Podman, you can use a bunch of commands and flags to set up your container. I have an example here where I’m running a simple Python frontend container and I want to expose the port that’s inside it.
To do that, I have used the publish flag so that when I go to localhost, port 8088, I’m able to access the server that’s running inside it. Another way that you can define or run containers locally is using a Docker Compose file. This is a form of YAML that the Docker Compose tool understands. Here’s an example of how you would define that. Let’s say you have your container running locally. You want to now test it out in a Kubernetes environment. Wouldn’t it be great if you could just copy either the CLI command that you have there, or the Docker Compose file, and paste it in the cluster? Unfortunately, you cannot do that. Those of us here who have run in Kubernetes before know that Kubernetes has its own YAML format to define your containers and workloads. As you can see, there are three formats going on here, so when you want to move from your local dev to a Kubernetes environment, you have to translate between these formats. That can be tedious and also error prone, as some configuration could be lost in translation, just because fields that are named one way in the Kubernetes YAML format are not exactly the same as the flags that are used in the CLI.
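To make the comparison concrete, here is a hedged sketch of the same port mapping in the Compose format (the `python-frontend` image and service names are assumptions, not from the talk):

```yaml
# docker-compose.yml — roughly equivalent to:
#   podman run -d -p 8088:8088 python-frontend
services:
  frontend:
    image: python-frontend
    ports:
      - "8088:8088"
```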
You really have to keep referring back to documentation to figure out how they map to each other. This is where the podman kube command can help. In an effort to make the transition from Podman to Kubernetes, and vice versa, easy on the developer, Podman can automatically generate a Kube YAML for you when you pass it a container ID or a pod ID that’s running inside Podman. At this point, you can literally copy and paste that Kube YAML file, put it in your cluster, and get running. Of course, you can further tweak that generated Kube YAML for any specific Kubernetes use cases or anything that you want to update later on.
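As a sketch, the Kube YAML that `podman kube generate` emits for the frontend container might look roughly like this; the names, image tag, and ports are illustrative rather than actual tool output:

```yaml
# Produced by something like: podman kube generate python-frontend > frontend.yaml
apiVersion: v1
kind: Pod
metadata:
  name: python-frontend-pod
spec:
  containers:
    - name: python-frontend
      image: localhost/python-frontend:latest
      ports:
        - containerPort: 8088
          hostPort: 8088
```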
I mentioned vice versa, so you can go from Podman to Kubernetes, but you can also go from Kubernetes to Podman with one command. Let’s say you have an application running in your Kubernetes production cluster. Something goes wrong with it, and you really want to debug it. You have some issues getting the right permissions, or access to try and figure it out on the production cluster itself, and you wish you could just replicate that workload locally here. Good news for you is that you can do that with the podman kube play command. You just have to go into your cluster, grab the Kube YAML of that workload, pass it to podman kube play, and Podman will go through all the container definitions, pod definitions, any resources defined in that, and create it on your local system, and start it for you. Now you have the same workload running locally, and you have all the permissions and access you need to be able to debug and test it, just with two commands, podman kube generate and podman kube play.
Outside of Kubernetes, Podman also works really well with systemd. You can use Podman and systemd to manage your workload using systemd unit files. This is especially useful in Edge environments where running a Kubernetes cluster is just not possible due to resource constraints. Edge devices are also a form of production environment, where you’re running your applications there. As we can see here, when you want to do that with systemd, systemd has its own different format. In addition to the three that we just spoke about, there’s a fourth one that you probably have to translate your workloads to if you want to move them to Edge environments. In the effort of standardizing all of this and making it easy for the developer, Quadlet was added to Podman. What Quadlet does is that it’s able to take a Kube YAML file, convert it to a systemd unit file under the hood, and start those containers with Podman and systemd for you, so the user doesn’t have to do anything. All you need is that one Kube YAML file that defines your container workload, and you can plug it into Podman, into a Kubernetes cluster, and into an Edge device using systemd.
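As a rough sketch, a Quadlet unit that points Podman and systemd at that Kube YAML could look like this; the file paths are assumptions, with rootless `.kube` files conventionally living under `~/.config/containers/systemd/`:

```ini
# ~/.config/containers/systemd/frontend.kube
[Unit]
Description=Python frontend via Podman Quadlet

[Kube]
Yaml=/home/user/frontend.yaml

[Install]
WantedBy=default.target
```

After a `systemctl --user daemon-reload`, systemd starts the workload with Podman like any other unit.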
Rootless First
That was on the container definition. Remember I mentioned that Podman focuses on security first. This can actually be seen in its architecture. Podman uses a fork-exec model. What this means is that when a container is created using Podman, the container process is a direct child of the Podman process. This means that root privileges are not required to run it. If someone is trying to do something weird on the machine, when you take a look at the audit logs, you can actually trace it back to exactly which user was trying to do what. When you compare that to the Docker daemon, root access is required, although now you can set up a rootless context. If someone is trying to do something weird, when you take a look at the audit logs, it points to this random UID, which essentially is the Docker daemon, but it doesn’t tell you which user was trying to do what. In the rootless-first scenario, there are two things to keep in mind. When you run your container, you want to run it as rootless on the host, which is the default when you run with Podman.
You also want your container application that’s running inside the container to be run as a rootless user. This is something that is often overlooked, because just running rootless on the host is considered enough, and container engines, by default, give you root privileges inside the container when you just start it up. This is something that developers usually don’t keep in mind. When you are running in a production Kubernetes based cluster, that is focused on security, so something like OpenShift, running rootless inside the container is a hard requirement. Keeping this in mind and practicing it while you’re doing your development will save you a lot of headaches when you then eventually translate from your local development to a production cluster. In the rootless first scenario, you want to run rootless inside and outside of the container.
Security
Continuing with the security theme, when you use Kubernetes out of the box, it provides you with three pod security standards. The first is privileged. Here, your container process has basically all the permissions and privileges possible. You definitely do not want to be using this when you’re running in production. The second one is baseline. Here, your process has some restrictions, but not so many that you’re banging your head against the wall trying to get your container working. It’s secure, but it’s also broad enough to give you the ability to run your containers without issues. The third one is restricted. This is the one that’s heavily restricted. You basically have zero or very few permissions. This is probably the one you want to use when you’re running in production, but it’s often the most difficult to get started with. We always advise that you start with baseline, the middle ground, and then continue tightening the security from there. Let’s take a deeper dive into security. There are two key aspects to it. The first one is SELinux. SELinux protects the host file system by using a labeling process to allow or deny processes from accessing resources on the system. In fact, file system CVEs that have happened in the past were mitigated if you had SELinux enabled on your host.
To take advantage of this, you need to have SELinux enabled both on the host and in the container engine. Podman and CRI-O are SELinux enabled by default, while other container engines are not. If you’re running a Kubernetes cluster using CRI-O, you will have SELinux enabled by default. If your host also has SELinux enabled, then your file system is protected by the SELinux security policies. Always setenforce 1. The second one is capabilities. You can think of capabilities as small chunks of permissions that you give your container process. The more capabilities your container has, the more privileges it has. On the right, this is the list of capabilities that Podman enables by default. It’s a pretty small list. It has been tightened down enough that you are secure, and also, you’re able to still run your containers without running into any security-based issues. When we compare this with the list allowed by the baseline pod security standard given by Kubernetes, they have the same list and actually have a few more capabilities that you can enable as well. When you run in production, you probably want to have even fewer capabilities enabled so that you can shrink your attack surface even further.
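In Kube YAML terms, tightening capabilities for production looks something like this sketch of a container `securityContext`; the specific capability added back is just an example of the drop-everything-then-allow pattern:

```yaml
# Fragment of a container spec: drop all capabilities,
# then add back only what the workload actually needs.
securityContext:
  runAsNonRoot: true
  capabilities:
    drop:
      - ALL
    add:
      - NET_BIND_SERVICE
```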
To reiterate on the two themes over here, one is that Podman makes it easy for you to transition between your local environment and your pod environment by giving you the ability to translate your CLI commands to Kube YAMLs, or by just being able to understand a Kube YAML and being able to plug that into Podman, Kubernetes, and systemd. The second one is, Podman’s focus on security first helps you replicate an environment that is secure, or quite secure to match what you would expect in a production environment.
Obviously, it’s not going to get you 100% there, but it can at least get you 50% there, so when you do eventually transition over, you run into less friction and have already targeted some of the main security aspects that come up when you move to production. With Podman, you can run your containers. You can also run pods, which gives you an idea of what it’s like to run in Kubernetes, because Kubernetes is all pod based. You can run your Docker Compose YAML file with one command. You can convert it to Kube YAML, and deploy it to Kubernetes clusters like kind, minikube, OpenShift, or vanilla Kubernetes itself. All of these capabilities and tools are actually neatly put together in a desktop application called Podman Desktop that runs on any operating system. It works on Mac. It works on Linux. It works on Windows. In fact, I’m using a Mac, and I will show you that.
Demo
This is what the desktop app looks like. I’m running on a Mac. I’m on an M2 Mac right now. It gives you information on what Podman version we are running right now, and just some other resources. On the left, we have tabs to take a look at any containers that we have, any pods, images. I’ve already pulled down a bunch of images. You can see the volumes. You can see any extensions that are available. Podman has a lot of extensions available to help you interact with different tools. You can set up a kind cluster, or you can set up a minikube cluster. You can talk to the Docker daemon, if you would like to do that. You can set up Lima as well. There’s a bunch of extensions that you can enable to take advantage of these different tools. For the demo, I am going to start a simple Python application that’s running a web server. This is just the code for it. I have already built and pre-pulled my images down because that takes a while.
If you would like to build an image, you can do that by clicking this button over here, build, and you can browse to the file where your containerfile is stored. In Podman, you can select the architecture you want to build for, and it will build it up for you. Since I already have it built, I’m just going to go ahead and run this container. I have my Python application as a web server that also has a Redis database that I need for the application. You’ll see why once I start it. First, I’m just going to click on this to start it up, give it a name, let’s call it Redis. I’m going to configure its network so that my Python frontend can actually talk to it once I start that. My container is up and running. When it starts, there are all these different tabs that you can take a look at. The logs obviously show you the logs of the container. We can inspect the container, so this gives you all the data about the container that’s running, any information you may or may not need.
By default, we create the Kube YAML for you as well. If you want to just directly run this in a Kubernetes cluster, you can copy paste this and deploy it there. With the terminal, you can also get dropped into a shell in the container and play around with it there. Now when I go back to my containers view, I can see that I have the Redis container running. Now let's go ahead and start the frontend. Let's give it the name python-frontend. I need to expose a port in this one so I can access it. I'm going to expose it on port 8088. I'm going to go back here and add it to the same network that I had added the Redis database to. Let's start that. That's up and running. Similar thing here: you can see the logs, you can inspect the container, you can see the Kube YAML, and you can also drop into a terminal over here. Same thing. When I go back to my containers, now I see I have two containers running. This is running locally on my Mac right now. Since I've exposed port 8088, let's go to a browser window and try to talk to that port. There you go. That's the application that I was running. Every time someone visits this page, the counter goes up by 1, and that is what the Redis database is needed for, to store this information. That was me running my container locally.
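The same two-container setup can be reproduced from the CLI. This is only a sketch; the frontend image name is hypothetical:

```shell
# A user-defined network gives containers DNS resolution by name
podman network create appnet

# Redis backend; its container name is its hostname on appnet
podman run -d --name redis --network appnet redis:alpine

# Python frontend, published on host port 8088
# (image name is illustrative)
podman run -d --name python-frontend --network appnet \
  -p 8088:8088 localhost/hit-counter:latest

podman ps
curl http://localhost:8088/
```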
Let's say that I want to put this in a pod to replicate how it would run when I run it in Kubernetes, but I still want to run it locally on my machine using Podman. Very simple to do. Go ahead and select this. You can put one, or probably as many containers as you would like, in a pod; I've not tested the limit on that. Then I'll click on that create pod button that showed up there. Click on create pod here. What it will do is create my pod with these two containers inside it. You can update the name of the pod to whatever you would like it to be. I have just left it as my-pod. Here we can see I have three containers running inside it: one is the infra container, and then I have the Redis and the Python frontend containers.
Yes, when I click on that, I can actually see the containers running inside it. Same thing with the pod here: when you go in, you can see the logs for both containers. You can inspect the container, and you can also get the Kube YAML for the whole pod with both containers inside. When I go back to containers here, we can see that the first two containers that I had started have been stopped in favor of this new pod with these containers inside it. It's still exposed at port 8088, so let's go ahead and refresh. As you can see, the counter started back at 1 because a new container was created, but every time I refresh, it's going to go up. I successfully ran my container and I podified it. That's what we call it. This is all local on Podman.
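"Podifying" the two containers looks like this on the CLI. Note that published ports are declared on the pod itself (they belong to the infra container), and pod members share a network namespace, so they reach each other over localhost. Image names are again illustrative:

```shell
# Ports are declared when the pod is created
podman pod create --name my-pod -p 8088:8088

# Start both containers inside the pod; they share a network
# namespace, so the frontend finds Redis on localhost
podman run -d --pod my-pod --name redis redis:alpine
podman run -d --pod my-pod --name python-frontend \
  localhost/hit-counter:latest

podman pod ps
```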
Now I have this pod. Let's say that I want to actually deploy it in a Kubernetes cluster, but I'm not ready to deploy it in a prod remote Kubernetes cluster yet. I want to test it out still locally using something like kind or minikube. As I mentioned earlier, Podman has those extensions. If you go to resources, you can set those up with the click of a button. I have already set up minikube on my laptop. We can, in fact, see the minikube container running inside Podman over here. If I go to my terminal and I do minikube status, you can see that my minikube cluster is up and running. Podman also has this tray icon over here where you can see the status of the Podman machine and get to the dashboard. It has this Kubernetes context selector as well. In the kubeconfig file that's on your laptop, you can sign into multiple different Kubernetes clusters, as long as you have the credentials for them. Podman Desktop can see the contexts of those different clusters available to you, and you can switch between them.
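Under the hood, this context switching is plain kubeconfig handling, so the terminal equivalent would be:

```shell
# List every cluster context your kubeconfig knows about
kubectl config get-contexts

# Point the current context at the local minikube cluster;
# Podman Desktop's deploy button follows the same selection
kubectl config use-context minikube
```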
You can decide which one you want to deploy to, which one you want to access, and which one you want to see the running pods in. Right now, I want to do it on minikube, which is running locally on my computer. That's what I have selected. Now all I do is go to this icon over here and click on deploy to Kubernetes. It will generate the Kube YAML file for me. You can tell it which namespace you want to deploy into; I just want the default namespace, and I'll click on deploy. When we do that, we can see the pod was created successfully and the containers are running successfully. When we go to my terminal and do kubectl get pods, we can see my pod is up and running over there. We can also see this on the Podman Desktop interface when we go to pods.
Podman Desktop is able to see which pods and deployments are running in the Kubernetes cluster you're currently pointing at, and it will tell you that this is the pod. You can see that the environment is set to Kubernetes, so you know it's the Kubernetes cluster and not your local Podman. Now, same thing here. Let's get the services, and we can see my-pod-8088 there. I want to expose this so I can actually access the web server running inside it. I'm just going to do minikube service on that, and run it. There you go. It opened a browser for me with that new container in the minikube cluster. Every time I refresh, the counter goes up by 1. I was able to, with the click of a button, deploy my container that I had running locally on Podman into a minikube cluster.
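The same deploy step can be done from the terminal. A sketch, using the `my-pod` name and the `my-pod-8088` service from the demo:

```shell
# Generate Kube YAML (including a Service) from the local pod
# and apply it to whatever the current context points at
podman kube generate --service my-pod | kubectl apply -f -

kubectl get pods
kubectl get services

# minikube can open the exposed service in a browser
minikube service my-pod-8088
```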
What's the next step? Pretty obvious: you want to deploy it remotely into a cluster that's probably a production cluster, or a cluster that you test on right before you send it out to production. The really easy way of doing that is basically the same steps again. I'm going to go over here and switch my context to point to a remote OpenShift cluster that I have running on AWS. When we do that, we no longer see the pod that's running in minikube, because now we're pointing at my OpenShift cluster. I can just go ahead here and do the same thing, deploy to Kubernetes, and it would have been deployed on the OpenShift cluster. You just switch the context and it does the same thing it did with minikube, launching it over there. It was pretty cool: since we exposed the port that we had the application running on, you could reach it running in an AWS environment.
This was just a demo of moving from local to prod. I did all of this using the Podman Desktop UI. If you're someone like me who really prefers to use the terminal and type instead of clicking on a bunch of buttons, all of this can be done using the Podman command line interface as well. You can do podman images, and it will show you a list of all your images. You can do podman ps, and it will show you a list of all your running containers. You can do podman pod ps, and it will show you your running pods. I mentioned that you can also go from prod back to local, or to Podman. You can do that by going back to the Podman Desktop app and clicking on this play Kubernetes YAML button over here. You can browse and point it to the Kube YAML that you want to play, and select whether you want to run it with Podman or in the Kubernetes cluster that you're currently pointing at. That's something you can do. I'm not going to do that from here. I want to show you how it works with the command line, so I'll do it from there. This is basically the Kube YAML that I'm going to play.
Very simple. It's there. It's a very simple nginx server that I have defined over here. I'm going to go back into my terminal, and let's do podman kube play. I'm going to set the publish-all flag, just because I want to expose the port that I have defined in there, and pass it the kube.yaml. There you go, the pod was created. When we do podman pod ps, we can see the nginx pod was created. When we do podman ps, we can see the nginx container was also created over here. We can see that it's exposed at localhost port 80. We can go back to our browser, go to localhost port 80, and the nginx server is up and running. With the Kube YAML file, I was able to just do podman kube play and get that running locally with Podman. That is basically the demo I had for you: it highlighted the path of moving from Podman to Kubernetes and back, and all the different ways you can test, play, and eventually deploy to your production cluster.
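A minimal Kube YAML like the nginx one played here might look like the following; the names are illustrative:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx-pod
spec:
  containers:
    - name: nginx
      image: docker.io/library/nginx:latest
      ports:
        - containerPort: 80   # --publish-all maps this to the host
```

With `--publish-all`, `podman kube play` publishes each declared containerPort to the host, which is why the server shows up on localhost port 80.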
Podman Desktop
You can use Podman to build container images and run your containers and pods. It integrates really well with Kubernetes. As we saw, it has all those features to easily deploy to Kubernetes and pull workloads back from there. It has the concept of pods to help you replicate what a Kubernetes environment would look like when you do run your workloads in Kubernetes after containerizing them. You can do image builds, set up the registries you would like to pull images from, load images for testing, and all of that. With the click of a few buttons, you can set up a kind cluster locally with Podman, or a minikube cluster locally, and you can connect to various Kubernetes contexts. One thing I'd like to highlight again is the extensions that Podman supports. We have support for kind, minikube, the Docker daemon, OpenShift Local, Lima, and many more. It's a way of giving all of these tools to developers so that they can play around with them and have access to everything and anything they might need when developing their containerized workloads.
K8s and Beyond
I know this talk focuses on Kubernetes, but there's a lot more a developer might need, and there are a bunch of cool features that have been added recently to Podman and Podman Desktop. One is AI Lab. AI is really big right now. We're all super excited about it, and so is Podman Desktop. They added a new extension called AI Lab, where you can run your AI models locally, so that you can then create your container applications using that as an inference endpoint. The next one is Bootc, where you can create and build bootable container images. The idea here is that, in future, your operating systems will be installed and upgraded using container images. I think it's pretty cool. It's still very much under development, but you have the ability to start playing around with it right now.
The final one is farm build, which is actually a feature I worked on personally, where you can build multi-arch images from one system. Given that Apple silicon Macs are so popular nowadays, having the ability to build images for different architectures is very important. In fact, I actually used this command when I was creating the demo for this talk, because my Mac is an Apple silicon (Arm) machine, so I was doing all of that with Podman Desktop on my Mac. Had the OpenShift cluster on AWS worked, that was on an x86 architecture, so I would have needed an image for that architecture for that part of the demo. If you're excited by all of this, one of my colleagues has put together a bunch of demos and info on all of this. You can find that at this link.
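For reference, the two multi-arch approaches can be sketched like this; the farm name, registry, and image names are hypothetical:

```shell
# Farm build: each machine in the farm builds natively, and the
# results are pushed as one multi-arch manifest list. Machines
# are registered beforehand with `podman system connection add`.
podman farm create mybuilders
podman farm build -t quay.io/example/myapp:latest .

# Single-machine alternative: emulated cross-builds
podman build --platform linux/amd64,linux/arm64 \
  --manifest quay.io/example/myapp:latest .
```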
AI Lab
I can show you the AI Lab extension in Podman Desktop, just because I think it's very cool. Back in Podman Desktop, I've already enabled it. I just click on AI Lab over here, and it gives me this view. I can take a look at the recipes catalog. These are some things that it gives you out of the box: you can set up a chatbot, summarization, or code generation. I'm going to set up a quick chatbot. I'll click on Start AI app. What it does is check out the repository and pull down the model. I chose to pull down the Granite model, but there are a bunch of different models you can pull down from InstructLab. It sets up the llamacpp-server and a Streamlit chat app. When I go to this running tab, I can see that app is up and running, and I can just go to the interface that they provide me by default. Let's ask it a question.
Let's ask it, what is InfoQ Dev Summit? It's going to do its thinking and give us an answer. I'm just using the interface that they gave me, but while you're developing your applications, you can also connect to it directly for your use case. I haven't really given it many resources to run with right now; that's why it's pretty slow. The more powerful your machine, the better the performance will be. I think it gave us a pretty accurate answer on what InfoQ Dev Summit is. With the click of a few buttons, I have a personal chatbot running on my machine with Podman Desktop. Then there's also the Bootc extension over here. This helps you create those bootable OS container images. You click on this, and it gives you the ability to switch between disk images and all of that. That's something you can also play around with.
Get Started with Podman and Podman Desktop
Podman is open source and completely free to use. Same for Podman Desktop. There's a pretty big community around it: discussions, PRs, issues, and contributions are all welcome. You can check out the podman.io documentation page to get started.