Mobile Monitoring Solutions


Tesla Introduces Official Developer APIs for Third-Party Integration

MMS Founder
MMS Renato Losio

Article originally posted on InfoQ. Visit InfoQ

Tesla has recently unveiled its first API documentation to support the integration of third-party applications. While primarily designed for fleet management, these APIs have captured the interest of developers, who see them as a potential starting point for the development of an app ecosystem.

Using the new APIs, an application can request vehicle owners’ permission to view account information, get vehicle status, and issue remote commands. Vehicle owners maintain control over which applications they grant access to and can change these settings at any time.

While reverse-engineered APIs have been used for many years, no official option was available, even though the car manufacturer had previously discussed offering a software development kit and creating a third-party app ecosystem. Frédéric Lambert, editor-in-chief of Electrek, writes:

The move has likely something to do with Tesla recently releasing new in-car fleet management and rental software with Hertz (..) It likely had to make access official through an API for the project, and now it is making it available to everyone. That’s good news because there were a few thriving businesses that were created around making third-party apps for Tesla, but they operated in a grey zone making them a bit shaky. Now if those apps can operate with the official API, they will become legitimate businesses, and it could encourage more to come.

The following API endpoints are currently documented: charging endpoints, partner endpoints, user endpoints, vehicle endpoints, and vehicle commands. The documentation provides examples for cURL, JavaScript, Python, and Ruby requests. For example, the following curl request performs a navigation_gps_request command to start navigation to the given coordinates:

curl --header 'Content-Type: application/json' \
  --header "Authorization: Bearer $TESLA_API_TOKEN" \
  --data '{"lat":45.65292317088107,"lon":13.765238974015045,"order":"integer"}' \
  'https://fleet-api.prd.na.vn.cloud.tesla.com/api/1/vehicles/{id}/command/navigation_gps_request'
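
Since the documentation also provides Python examples, a similar request can be issued from Python with any HTTP client. The sketch below is illustrative only: the host is taken from the curl example above, while the vehicle_data path and the response fields shown are assumptions that should be checked against the official endpoint reference.

# Hypothetical sketch of reading vehicle status through the Fleet API.
# The host comes from the curl example above; the exact path and the
# response fields are assumptions to verify against Tesla's documentation.
import os
import requests

TOKEN = os.environ["TESLA_API_TOKEN"]
HOST = "https://fleet-api.prd.na.vn.cloud.tesla.com"

def get_vehicle_data(vehicle_id: str) -> dict:
    """Fetch the current state of a vehicle the owner has granted access to."""
    resp = requests.get(
        f"{HOST}/api/1/vehicles/{vehicle_id}/vehicle_data",  # assumed endpoint
        headers={"Authorization": f"Bearer {TOKEN}"},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()["response"]

# Example usage (the vehicle id is a placeholder):
# data = get_vehicle_data("1234567890")
# print(data["charge_state"]["battery_level"])  # assumed field names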

Mark Gerban, connected car strategist at Mercedes-Benz, comments:

Curious if this will help developers generate any revenue, since if they want significant traction and pick up some big players with bigger features, they’ll need to offer something in return.

In a popular thread on Hacker News, many developers are excited, but James Darpinian, a graphics and computer vision engineer, warns:

Almost all of this functionality has been available for many years through the reverse-engineered API used by the official Tesla app (…) The difference here is that Tesla is creating a new, officially supported API explicitly for third parties, with official documentation, scoped authentication, and a developer program that requires registration (and in the future, payment). Presumably, once the SDK is finalized they will start cracking down on apps using the older reverse-engineered API.

Earlier this year, Tesla introduced Fleet Telemetry, a server reference implementation for Tesla’s telemetry protocol. The service allows developers to connect directly to their vehicles, handling device connectivity and receiving and storing transmitted data. A configured device establishes a WebSocket connection to push configurable telemetry records, and Fleet Telemetry provides clients with ack, error, or rate-limit responses.
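
As a purely illustrative sketch of that flow (a device pushes records, the server replies with ack, error, or rate-limit messages), the toy WebSocket handler below mimics the idea in Python. It is not Tesla’s actual protocol or reference implementation; the JSON message shapes and the per-connection limit are invented for illustration only.

# Toy sketch of the ack / error / rate-limit flow described above.
# Message shapes and the limit are invented; the real Fleet Telemetry
# service defines its own wire format and connection handling.
import asyncio
import json
import websockets  # pip install websockets

MAX_RECORDS_PER_CONNECTION = 10_000  # hypothetical cap

async def handle_device(ws):
    received = 0
    async for message in ws:
        received += 1
        if received > MAX_RECORDS_PER_CONNECTION:
            await ws.send(json.dumps({"type": "rate_limit", "retry_after_s": 60}))
            continue
        try:
            record = json.loads(message)
            store(record)  # persist the telemetry record (stubbed below)
            await ws.send(json.dumps({"type": "ack", "id": record.get("id")}))
        except json.JSONDecodeError as exc:
            await ws.send(json.dumps({"type": "error", "reason": str(exc)}))

def store(record: dict) -> None:
    print("stored", record)

async def main():
    async with websockets.serve(handle_device, "0.0.0.0", 8443):
        await asyncio.Future()  # run forever

if __name__ == "__main__":
    asyncio.run(main())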

To get an API key and interact with the API endpoints, developers must create a Tesla account, follow the onboarding process, and request approval.



Podcast: Teams, Teamwork and Generative AI as a Team Member

MMS Founder
MMS Jeremiah Stone

Article originally posted on InfoQ. Visit InfoQ


Transcript

Shane Hastie: Good day, folks. This is Shane Hastie for the InfoQ Engineering Culture podcast. Today I’m sitting down with Jeremiah Stone. Jeremiah is the CTO of SnapLogic and comes to us today from California. Jeremiah, welcome. Thanks for taking the time to talk to us.

Jeremiah Stone: Thank you, Shane. I’m delighted to be here.

Shane Hastie: Thank you. My typical starting question with a guest is who’s Jeremiah?

Introductions [00:29]

Jeremiah Stone: Well, I’m a father, I’m a technologist. But more than anything, Shane, I think the essence of who I am is somebody who loves getting as close as possible to really devilishly hard problems with other like-minded people and solving them. So I’m a problem solver, I think, would be the bottom line of who I am and what I really enjoy doing.

Shane Hastie: So tell us a little bit about your background and what got you to the role of CTO at SnapLogic.

Jeremiah Stone: Sure. Well, I started off my career as an undergraduate in mathematics and computer science, and I was fortunate enough at that time to gain a research assistant role at the National Center for Atmospheric Research in Boulder, Colorado. And that’s where I really fell in love with the notion that you could actually improve the world, help it run better, using technology. I had the good fortune to work on a project that was helping to detect clear-air turbulence for airports. And so it was a very tangible, “Wow, if I build this thing, it could save people’s lives.” Very heady stuff as a student, and I was hooked. And so over the course of a career, which also included time at SAP building enterprise applications and at General Electric building and delivering industrial applications, I found my way back into that world of predictive applications about a decade ago, and I’ve been working in the AI- and ML-enabled application world ever since.

And during that time, all of the big gnarly problems I was dealing with were about connecting valuable bits together into larger systems. And I was very frustrated by the fact that you could really build a great individual component of a system, but getting it all to work together in an elegant, ongoing way turned out to be half or more of the work. And that drove me nuts. So I went and found a company that was using AI and machine learning to solve this problem. That’s how I ended up at SnapLogic: they are crazy like I am and believe that integration can be learned, and that’s what we’re trying to do.

Shane Hastie: One of the things that you mentioned in the chat we were having earlier was focusing on ways to work together to build great products. So what are some of the key things when it comes to the people aspects, the ways that we can and do work together?

The team is the primary unit of value generation in product development [02:34]

Jeremiah Stone: I think in my experience, having been in many different environments as an individual contributor, as a member of a team, and leading teams, the team is the right unit of operations, first of all. So the very first thing is to set your mindset that, as we think about how to build, manage, monitor, and maintain anything, often the first and most important thing I need to deal with in any environment is to help the groups I work with stop thinking in terms of individual disciplines and start thinking in terms of a team that has different skills and capabilities. I’d say if there’s one great learning from my time, it’s that building a well-gelled team with the right mix of professional capabilities is the single hardest thing to do, and yet the single most valuable thing you can do in order to be successful building technology.

And these days, I think software development, at any rate, really consists of a blend of professions, professions that in and of themselves are difficult to master. And that means creating that team is very difficult as well, because you have to find individuals that can, for example, do quality engineering, software development, user experience, documentation, product management, and so on. Those are the core disciplines, and then you start to be able to have a team that can actually dig into three disjoint problems. Is it technically feasible? Is it viable from a business point of view? And is it desirable from a human point of view? The center of that Venn diagram is where greatness happens, but that starts with a team. So I’d say my greatest learning is to understand that you have to first build the team before you can build the thing.

Shane Hastie: How do you move people from being a group of individuals to being a team?

Techniques to assist team formation [04:17]

Jeremiah Stone: Man, if I could answer that succinctly, I would be a very, very wise man indeed. I can share some of the learnings I have had over time, and I’m reminded of the book Peopleware by Tom DeMarco and Timothy Lister, I believe, where they basically say, “I can’t tell you how to build a great team, but I can tell you the things that will prevent you from building a great team.” I believe they titled that chapter Teamicide. And I think my experience supports DeMarco and Lister’s thesis there, in that the formation of a great team is only something the team can do itself. But I think there are things that we can do as leaders and as teammates that create the possibility for the creation of a great team. They had, I think, a list of 10 or 15 things.

There’s only a few that I’ve seen happen over and over and over again. I think first and foremost, creating an environment with shared risk and shared success. Very often when we organize to build technology, we organize humans within their disciplines and then we create a team out of them, which means that if you are in engineering and you report up to the head of engineering and you’re in product management, you report the head of product, your design into design, you can actually create three individual definitions of success. And if we have three individual definitions of success, I don’t see that team forming. And so that’s often a thing that I’ve had to work at over and over again is saying when it comes to a performance review, when it comes to how we determine whether we’ve failed or succeeded, let’s define something at the team level and then the team fails or succeeds together.

So, shared success, shared risk. I also think the way we define delivery turns out to really matter. We’re very fond of setting arbitrary deadlines based on insufficient knowledge. When you launch any kind of effort or project, by the nature of what we do, we don’t know what we had to do to build the thing until we’ve actually built it; therefore, we have no reliable way of estimating it up front. We have all of these interesting techniques like story points and other approaches to doing that, but fundamentally it’s unknowable. And so I think having a team that is able to set a goal rather than a delivery deadline, and then give updates on the progress and challenges against that goal in a timely manner to people who can help them, I’d say those are the two things. If I were to boil down Lister’s entire list, those two things are my most important learnings: shared success and risk, and a goal orientation rather than a deadline or estimated-delivery orientation, a goal target as well as the ability to surface blockers or progress against it.

Shane Hastie: There’s a lot going on right now in the realm of generative AI. It’s very disruptive, I know, and one of the reasons we’re talking is that you are actively involved in looking at the impact of this and at how developers in particular are using these tools. But stepping away a little bit from that, at a conceptual level, where are we going as an industry with these generative AI tools?

AI is decreasing the level of deep technical knowledge required to create code and making it more accessible to people who have deep domain knowledge in other areas [07:18]

Jeremiah Stone: We don’t know for sure, but based on what we do know, I think there are two immediate, pretty clear directions we’re heading in right now. I think both of them are consistent with what we’ve been doing as an industry for decades anyway, and that is making the ability to create information or technical products more accessible to individuals who don’t have the same level of advanced training. I think we’ve done this with integrated development environments. We’ve done this with low-code, no-code tooling. I have a daughter in high school and she’s using a visual programming model that was actually developed here at the University of California, Berkeley. And many of us with kids have seen the Scratch programs our kids develop. So I think part of where this is taking us is decreasing the level of deep technical knowledge required to create code and making it more accessible to people who have deep domain knowledge in other things.

So maybe you have deep domain knowledge in how to run a transportation network, how to run a bus network, and you have an idea for some technology or software that could help you do that. Well, you’re unlikely to have a computer science degree and be able to develop it. But now increasingly we are creating the ability for people with less technical aptitude to be able to actually build things. And whether that is what we’ve done with cloud and making the startup cost much lower because we can abstract the infrastructure and use a software interface to create that, or with the programming experience, we’re making it more available. An example for that would be the whole text to SQL world where we want to give people who have great questions for their data, structured relational data, the ability to simply ask … We’ve said this for decades, ask questions of the data.

AI is not significantly increasing developer productivity because writing code is such a small part of the development process [09:04]

Well, now we’re getting to a point where you can actually ask questions of the data, which is kind of interesting. So I think that’s one, increasing accessibility. That seems pretty clear. I feel confident about that. The other one that’s very, very, very hyped right now is increasing productivity of engineers and productivity of engineers being measured, I guess, in if we use DORA metrics, how often people are pushing to production, how often are there change failures and rollbacks and that sort of thing. I’m less confident about the ability for generative AI to increase developer productivity predominantly because writing code is only a very small subset of what developers actually do. And so I think that we can probably see a bit of a levelling of the quality of code produced that will happen in the industry. So we’re basically helping to bring people up who may not have had the good fortune to work under the tutelage of more senior engineers and have learned tips and tricks that is very oral, in our industry is a lot of oral knowledge that’s passed down from senior engineers to more junior engineers.

I think that might be helpful, but I still think that what we’re not addressing is that kind of fundamental analysis and design component. So if I were to give you the very succinct answer of where we are headed as an industry: one, we’re making the ability to create software more accessible to people who have lower levels of technical skill. And two, I think we are further increasing the, let’s say, first-delivery quality of code that’s produced by professional software developers. Both of those are good things. What I vehemently disagree with is the idea that this is putting developers or coders out of a job anytime soon, and I don’t think that we’re going to experience huge increases in individual developer productivity across the industry.

Shane Hastie: That’s interesting. It feels to me that there’s almost a contradiction between the two things you were talking about there though. On the one hand, we’ve got people who are not technology experts producing technology products more easily, on the other we’ve got the “quality” of the code that the technologists are producing getting better. Are we saying that these non-technology experts will produce something that won’t be of high quality? Where does it hand off between the two, I suppose?

The distinction between skilled amateur with AI and professional developers [11:28]

Jeremiah Stone: I think I look at it a little bit differently. So let’s draw a metaphor to power tools. I, as a very mediocre DIY weekend carpenter around the house, can use a power drill to drill a hole and bang together a table, et cetera, and that table can hold up food and support things. And that’s great, because if we didn’t have power tools, then I’d have to use a hand-driven approach, and I probably couldn’t build the table at all. So I can build something functional and it’s capable. On the other hand, I am not a furniture maker and I cannot make a beautiful piece of furniture that requires significantly higher skill. But if we look at it from a functional perspective, could I do something? I have; I’ve built furniture and things that we use in the … backyard furniture, and it’s just fine. Does it hold me up? Am I worried about it as a safety hazard? Does it serve the function? Yes, it does.

Is it something that you would look at as art that an actual master furniture maker would make? No, you would not. And so I think that’s the distinction: we’re giving accessibility to create functional software systems to a much larger number of people. The ability to create highly scalable, fault-tolerant, capable systems that are going to run at a global scale and handle the problems you have to earn your way into through success, I think that’s a much harder proposition, and it requires significantly more machinery. But take the data science domain: the emergence of Python notebooks, Jupyter notebooks, has given people who couldn’t use vi or Emacs, or whatever else you needed to build the multiple files to run data science, a much more accessible way in, and they’re able to bring their mathematics knowledge to do something capable. That thing that they do is often an analytic output that’s highly functional.

It would not be confused with a model that could run for billions of requests per minute, et cetera, on a globally distributed network. So I think that’s the distinction I’d draw. These are almost two different axes: reliability, scalability, and functionality on one hand, and accessibility to build something functional for a given population that may need it on the other.

Shane Hastie: Thank you. A very clear metaphor there. So digging into the research, you mentioned that you are actually doing active research. You want to tell us a little bit about what you’re doing there?

Jeremiah’s research into large language models for code generation [13:47]

Jeremiah Stone: Sure. So I’m in the process of finishing a master’s degree in information and data science. And my specific research area has been exactly this domain of large language models for code generation or program synthesis. So, two different subdomains there. The first place that I started out was what is known as a sequence-to-sequence problem for text to SQL. Given a question and a database, to what degree can you generate a SQL statement that will not only execute but actually return the same results as a human expert programmer would? That was the first. My current active research project is looking at coding copilots. Many people are familiar with GitHub Copilot; there are a few other ones on the market that are there to assist software developers in various tasks, from code completion to code generation to documentation, analysis and refactoring, or in-context help documentation, et cetera. Those are common skills that are being applied now.

And my specific domain of research is how to make such programming assistants or models intelligent, or higher performing, on private code bases. Nearly all enterprise code bases were not available to any of these models for training, and the way these models work is that they are statistical text generators that will generate text most similar to the data they were trained on. Well, if none of the private code was available for training, they will not perform well in that context if you’re trying to create software. So the research I’m working on right now is asking: what is the minimal amount of work, and what are the best techniques, to adapt these models in a private, local way to a private code base?

Shane Hastie: What’s coming up? What are you learning? Let’s start with the text to SQL. When will I be able to just say, “Tell me how many clients are buying our products,” and magically get an answer?

Jeremiah Stone: That domain is really doing amazingly well. As of right now there’s a common benchmark, if your listeners want to look into this more, called the Spider benchmark, which is run by Yale University. It’s an open benchmark with submissions of different approaches. I believe the current state of the art for a generalized model produces somewhere in the low-to-mid 80% accuracy range, maybe 84%, maybe 87%: given a question and the schema of the database, it’s able to return a perfectly accurate SQL statement on its first try. That’s remarkable, I think; if you’re in the mid 80% with a generalized model, that’s actually quite good. And I think what we’re seeing right now is a very rapid adoption of text to SQL in database tools, and we can expect very quickly to have high-performing, capable interfaces to Redshift or BigQuery or Microsoft SQL, Synapse, et cetera. I think that’s the first place you’ll see this emerge: your ad hoc query builder will now be something you can type your question into.
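
To make the workflow concrete, here is a minimal sketch of what a text-to-SQL step looks like in practice: build a prompt from the schema and the question, let a model generate SQL, and execute it. This is not a Spider submission or any specific product; the generate callable is a stand-in for whatever model is used, and the schema is invented.

# Minimal text-to-SQL sketch. `generate` is a stand-in for any language
# model call; the prompt format and the schema are illustrative only.
import sqlite3
from typing import Callable

SCHEMA = """
CREATE TABLE clients(id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE orders(id INTEGER PRIMARY KEY, client_id INTEGER, product TEXT);
"""

def text_to_sql(question: str, generate: Callable[[str], str]) -> str:
    prompt = (
        "Given this SQLite schema:\n"
        f"{SCHEMA}\n"
        "Write one SQL query, and nothing else, answering:\n"
        f"{question}"
    )
    return generate(prompt).strip().rstrip(";") + ";"

def answer(question: str, db: sqlite3.Connection, generate) -> list:
    sql = text_to_sql(question, generate)
    return db.execute(sql).fetchall()

# Example usage with a fake "model" that always returns the same query:
if __name__ == "__main__":
    db = sqlite3.connect(":memory:")
    db.executescript(SCHEMA)
    db.execute("INSERT INTO clients VALUES (1, 'Acme')")
    db.execute("INSERT INTO orders VALUES (1, 1, 'widgets')")
    fake_model = lambda prompt: "SELECT COUNT(DISTINCT client_id) FROM orders"
    print(answer("How many clients are buying our products?", db, fake_model))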

And then we’re already starting to see this in analytical tools, let’s say a Tableau, Power BI, or Looker type of thing, and also in bespoke applications where now not only the question is free text, but the answer can be free text as well, because these models are bidirectional, so you could take that numeric answer and give back a textual response. So I think that’s probably the place where most of us will experience the really capable large language models in the work environment for development and development-oriented teams.

Shane Hastie: And in the private code base?

Jeremiah Stone: Much earlier days in the private code base world. Now there are tools like GitHub Copilot that are available for these purposes. Those tools perform well from a syntax perspective, so they can generate good Java, they can generate good Python, they can do good documentation. What those tools today are incapable of doing is picking up the correct libraries that you might be using, or your coding conventions. A lot of the time, the way we define success within an engineering team is that we agree on conventions for white space, line length, commenting, these sorts of things, so that we can have transferable work between individuals in the team. That’s an area where maybe for Python we can get the PEP 8 style standards or something like that, but not a team-specific coding standard. And that’s where much more active research is in play now, to take and adapt these models to specific code bases. In fact, the performance of these models on generating code is still an incredibly active domain of research.

Most recently, the most exciting development has been the release of Code Llama and Code Llama Python from Meta; very interesting models there. But again, these are general-purpose tools that are not adapted to individual code bases. And so I think that does limit, to a certain extent, their utility for engineering teams.

Shane Hastie: So where’s it going?

The emergence of local models trained on private code-bases [18:50]

Jeremiah Stone: I really anticipate, at least based on our research, that we are going to see the emergence of local models that are tuned on private code bases, particularly among private enterprise software companies and enterprises that are developing software themselves. That will probably be supported by integrated development environments in the near term, whether that’s JetBrains or Visual Studio, those types of environments, or by independent plugins that you can then use to host these. And I think we’re headed in that direction specifically because, back to our earlier point, extrapolating from things we already do and care about is a good idea, because they have momentum. And what do we care about? We care about the creation of good work product. We care about quality and assessment of that quality. We care about continuity and transferability of work between team members. Those are all good work practices within a team, and support from large language models for those specific coding activities reinforces things we already care about.

And so I’m pretty confident that we’ll be heading in that direction. But what we will not be able to overcome, and I experience this myself in my company (we’re basically AI maximalists, we are all in and we believe in this), is the concern around intellectual property: we took a very long time to evaluate and approve GitHub Copilot in our environment, because we really care about our intellectual property and our ability to keep it safe and secure and to represent our employees and shareholders. And I think that is a warring force against using these tools in teams. If there were no way to overcome the privacy concerns about sending my code to a large language model, basically exposing it to the world, then I wouldn’t really feel like doing that. On the other hand, if it were a model that was fully contained within my environment and I wasn’t teaching Amazon, Google, or Microsoft anything about my code, then that might be okay. And that’s where I think we’re headed: a place where this will be one of the first sort of not only self-hosted but self-tuned models used in engineering practices for, really, as we talked about earlier, helping to improve the quality and consistency of code bases.

Shane Hastie: So as an engineer, I’m not at risk of being replaced by the machine, I’m going to be able to leverage the machine in my context.

Engineers working with locally tuned AI tools rather than being replaced by them [21:19]

Jeremiah Stone: I think that’s right. Andrew McAfee and Erik Brynjolfsson had a book a while ago, or at least it was an ebook, that said it’s not the race against the machine, it’s the race with the machine. And it’s the teams that figure out how to team with the machine that will have a high degree of success. And I think we’re dealing with something that is a Jevons paradox. There’s a lot of fear that these AI models will take away programmers’ jobs, but the flip side of this is: what is enough software? Is there any team that has too much capacity and an empty backlog? No, every team I’ve ever worked with has this infinite backlog of work, and people don’t even ask them for help anymore because they have no capacity.

So the more readily available this expertise is, the more demand there will be for it. And so I really don’t subscribe to the pessimistic, dire view that this is going to put programmers out of work. Quite the contrary, I think this is only going to create an even more insatiable demand for programmers in the workplace.

Shane Hastie: A topic that I’m personally very interested in, and I’m throwing this one at you without any notice, is the ethics of computing. You’ve mentioned that you’ve worked in high-risk environments, you’ve worked in life- and safety-critical environments. How do we keep things ethical?

Jeremiah Stone: With regard to artificial intelligence or in what context?

Shane Hastie: I’m going to say broader context, what’s happening in our industry from an ethics perspective?

Ethical challenges in the technology industry [22:46]

Jeremiah Stone: I hear a great deal about this topic as well, and there are so many dimensions to it. The fundamental definition of ethics, in the context of respecting the rights of an individual as well as counterbalancing the needs of society, comes down to the short-form definition of doing the right thing when nobody’s looking. That’s a good definition of what it means to be ethical. There are so many different directions to think about this in our industry, so let’s just take it one step at a time. I think one area where we’ve had challenging and questionable behavior is utilizing wage arbitrage to take advantage of technologists in lower-wage working environments. That definitely seems to be changing as the living standards and incomes of those individuals increase; we’re seeing more and more fully autonomous teams in so-called offshore locations.

And I think that’s a broadly good trend. There still is certainly some, I think, unsavoury behaviour, where companies offshore the product and fire the team in the high-cost location, or make the team in the lower-cost location work unreasonable hours or put them under unreasonable pressure. I think that still happens, but at least from what I can see, it’s starting to tail off, simply because businesses tend to care about cost, and as cost equalizes, we start to see the ethical behaviors improve. So I think that’s on a good trend.

On the other hand, I am very concerned about the rise of the surveillance society and state. Whether that is somebody tracking your browsing habits for whatever reason, or your speech and your behaviors, I think we are in a very murky time for that type of ethical behavior currently.

And I’m quite concerned about that more broadly, whether we’re talking about nation states or individuals. And we’ve certainly seen some very interesting behaviors, whether it’s Twitter being purchased and turned into X by a self-avowed free speech maximalist, or other behaviors in terms of closing down the ability to communicate and speak freely. So I think there are definite concerns in terms of individual freedoms and surveillance where I don’t see a clear point of view today. And I think we should be concerned; we should be active individually as leaders to push in the right direction and to demand good behavior. We could go on for many hours on this topic, Shane, but those are the two: are we treating the people doing the work decently? And by decently, I mean giving autonomy, giving support, giving the capability to contribute in a meaningful way.

I think that’s broadly improving. And even now we see countries that were previously not locations for offshoring coming online, whether it’s Thailand or the Philippines or other places, and they seem to be experiencing very rapid wage growth and leveling off. I don’t think AI and ML progress will have a massive impact on that trend, and I hope I’m not proven wrong. I think that’s a good direction. On the other hand, I think the continued low cost of storing information, the continued sensorizing of both the built and digital world, and the purposes people have put that information to should concern us all, and we should be active and thoughtful about how we as individuals, corporations, and the governments that should serve us are dealing with that.

Shane Hastie: Jeremiah, thank you. Very interesting wide-ranging conversation, and I appreciate your willingness to go where it goes. If people want to continue the conversation, where do they find you?

Jeremiah Stone: Thank you. I have enjoyed the conversation, and yes, it was definitely wide-ranging but enjoyable for my part as well. I can be found on Twitter, @jeremiahstone, on X, I guess we’re meant to call it now, or you can hit me up on LinkedIn; I’m more than happy to connect and collaborate. Those are two easy ways to find me … I’m active on. And I’m grateful for the opportunity to come on today and enjoy the conversation you continue to drive with the pod. Thank you.

Mentioned

Spider benchmark that is run by Yale University

Jeremiah’s research into Large Language Models Trained on Code

Jeremiah on Twitter/X and LinkedIn




JetBrains Launches IntelliJ-Based Writing Tool WriterSide

MMS Founder
MMS Sergio De Simone

Article originally posted on InfoQ. Visit InfoQ

With WriterSide, JetBrains aims to allow developers and writers to create technical documentation using a write, test, build workflow. The new tool is based on IntelliJ-platform IDEs and has been used to create most of JetBrains products’ documentation for the last few years.

This project developed out of hundreds of customer interviews and 10+ years of working on the IntelliJ Platform documentation. These experiences gave us a long list of features to build and problems to solve.

One of the key goals of WriterSide is enabling effective collaboration among developers and writers and bridging the gap between them. It enforces a “doc as code” pipeline based on development tools like Git, pull requests, and automated checks, making it possible for the whole team to contribute, review, and track changes as they are used to doing with code. This, according to JetBrains, simplifies the documentation pipeline.

WriterSide supports both Markdown and a custom XML-based markup and allows authors to combine both formats in the same document. This makes it possible, for example, to inject semantic attributes into a Markdown document to enrich it. Conversely, WriterSide also supports translating Markdown fragments into XML on the fly and importing them into an XML document.

WriterSide is able to perform over 100 built-in tests, such as tests for broken links, missing resources, incorrect attribute values, and so on. Tests can be customized for in-house spelling and style conventions. The tool also offers predefined designs that can be easily customized, so authors will not need to deal with layout and CSS.

Since authors are required to write XML or Markdown, WriterSide includes a live preview feature that instantly reflects every change and highlights any errors. This is especially useful when there is no access to the CI/CD pipeline or when builds take long, since it makes it possible to check the output without waiting for a build to finish.

As a final note about WriterSide features, it provides an AI-based spellchecker and grammar correction built by JetBrains, supporting over 25 languages.

WriterSide is available as a plugin for JetBrains IDEs and as a stand-alone application under an Early Access Program (EAP). The product is completely free for the duration of the EAP, and JetBrains has promised there will be either a free version or an ongoing EAP to allow using the tool for free.



Are Computer and Technology Stocks Lagging Lam Research (LRCX) This Year?

MMS Founder
MMS RSS

Posted on mongodb google news. Visit mongodb google news

For those looking to find strong Computer and Technology stocks, it is prudent to search for companies in the group that are outperforming their peers. Lam Research (LRCX) is a stock that can certainly grab the attention of many investors, but do its recent returns compare favorably to the sector as a whole? By taking a look at the stock’s year-to-date performance in comparison to its Computer and Technology peers, we might be able to answer that question.

Lam Research is one of 628 individual stocks in the Computer and Technology sector. Collectively, these companies sit at #5 in the Zacks Sector Rank. The Zacks Sector Rank gauges the strength of our 16 individual sector groups by measuring the average Zacks Rank of the individual stocks within the groups.

The Zacks Rank is a successful stock-picking model that emphasizes earnings estimates and estimate revisions. The system highlights a number of different stocks that could be poised to outperform the broader market over the next one to three months. Lam Research is currently sporting a Zacks Rank of #2 (Buy).

The Zacks Consensus Estimate for LRCX’s full-year earnings has moved 4% higher within the past quarter. This shows that analyst sentiment has improved and the company’s earnings outlook is stronger.

Based on the most recent data, LRCX has returned 39.2% so far this year. In comparison, Computer and Technology companies have returned an average of 28.2%. This means that Lam Research is outperforming the sector as a whole this year.

One other Computer and Technology stock that has outperformed the sector so far this year is MongoDB (MDB). The stock is up 66.3% year-to-date.

For MongoDB, the consensus EPS estimate for the current year has increased 22.6% over the past three months. The stock currently has a Zacks Rank #2 (Buy).

Breaking things down more, Lam Research is a member of the Semiconductor Equipment – Wafer Fabrication industry, which includes 4 individual companies and currently sits at #197 in the Zacks Industry Rank. On average, stocks in this group have gained 19.6% this year, meaning that LRCX is performing better in terms of year-to-date returns.

On the other hand, MongoDB belongs to the Internet – Software industry. This 148-stock industry is currently ranked #62. The industry has moved +37% year to date.

Investors with an interest in Computer and Technology stocks should continue to track Lam Research and MongoDB. These stocks will be looking to continue their solid performance.


Zacks Investment Research

Article originally posted on mongodb google news. Visit mongodb google news



Mongodb Inc (MDB) Up 2.04% in Premarket Trading – InvestorsObserver

MMS Founder
MMS RSS

Posted on mongodb google news. Visit mongodb google news


Friday, October 27, 2023 08:02 AM | InvestorsObserver Analysts


Mongodb Inc (MDB) is up 2.04% today.


MDB stock closed at $327.33 and is up $6.67 during pre-market trading. Pre-market tends to be more volatile due to significantly lower volume as most investors only trade between standard trading hours.

MDB has a roughly average overall score of 57 meaning the stock holds a better value than 57% of stocks at its current price. InvestorsObserver’s overall ranking system is a comprehensive evaluation and considers both technical and fundamental factors when evaluating a stock. The overall score is a great starting point for investors that are beginning to evaluate a stock.

MDB gets an average Short-Term Technical score of 60 from InvestorsObserver’s proprietary ranking system. This means that the stock’s trading pattern over the last month has been neutral. Mongodb Inc currently has the 49th highest Short-Term Technical score in the Software – Infrastructure industry. The Short-Term Technical score evaluates a stock’s trading pattern over the past month and is most useful to short-term stock and option traders.

Mongodb Inc’s Overall and Short-Term Technical score paint a mixed picture for MDB’s recent trading patterns and forecasted price.



Article originally posted on mongodb google news. Visit mongodb google news



Article: Agile Rehab: Replacing Process Dogma with Engineering to Achieve True Agility

MMS Founder
MMS Bryan Finster

Article originally posted on InfoQ. Visit InfoQ

Key Takeaways

  • We saw negative outcomes from agile scaling frameworks. Focusing on “Why can’t we deliver today’s work today?” forced us to find and fix the technical and process problems that prevented agility.
  • We couldn’t deliver better by only changing how development was done. We had to restructure the organization and the application.
  • The ability to deliver daily improved business delivery and team morale. It’s a more humane way to work.
  • Optimize for operations and always use your emergency process to deliver everything. This ensures hotfixes are safe while also driving improvement in feature delivery.
  • If you want to retain the improvement and the people who made it happen, measure it so you have evidence when management changes; otherwise, you may lose both the improvement and the people.

Struggling with your “agile transformation?” Is your scaling framework not providing the outcomes you hoped for? In this article, we’ll discuss how teams in a large enterprise replaced heavy agile processes with Conway’s Law and better engineering to migrate from quarterly to daily value delivery to the end users.

Replacing Agile with Engineering

We had a problem. After years of “Agile Transformation” followed by rolling out the Scaled Agile Framework, we were not delivering any better. In fact, we delivered less frequently with bigger failures than when we had no defined processes. We had to find a way to deliver better value to the business. SAFe, with all of the ceremonies and process overhead that comes with it, wasn’t getting that done. Our VP read The Phoenix Project, got inspired, and asked the senior engineers in the area to solve the problem. We became agile by making the engineering changes required to implement Continuous Delivery (CD).

Initially, our lead time to deliver a new capability to the business averaged 12 months, from request to delivery. We had to fix that. The main problem, though, is that creating a PI plan for a quarter, executing it in sprints, and delivering whatever passes tests at the end of the quarter ignores the entire reason for agile product development: uncertainty.

Here is the reality: the requirements are wrong, we will misunderstand them during implementation, or the end users’ needs will change before we deliver. One of those is always true. We need to mitigate that with smaller changes and faster feedback to more rapidly identify what’s wrong and change course. Sometimes, we may even decide to abandon the idea entirely. The only way we can do this is to become more efficient and reduce the delivery cost. That requires focusing on everything regarding how we deliver and engineering better ways of doing that.

Why Continuous Delivery?

We wanted the ability to deliver more frequently than 3-4 times per year. We believed that if we took the principles and practices described in Continuous Delivery by Jez Humble and Dave Farley seriously, we’d be able to improve our delivery cadence, possibly even push every code commit directly to production. That was an exciting idea to us as developers, especially considering the heavy process we wanted to replace.

When we began, the minimum time to deliver a normal change was three days. It didn’t matter if it was a one-line change to modify a label or a month’s worth of work — the manual change control process required at least three days. In practice, it was much worse. Since the teams were organized into feature teams and the system was tightly coupled, the entire massive system had to be tested and released as a single delivery. So, today’s one-line change will be delivered, hopefully, in the next quarterly release unless you miss merge week.

We knew if we could fix this, we could find out faster if we had quality problems, the problems would be smaller and easier to find, and we’d be able to add regression tests to the pipeline to prevent re-occurrence and move quickly to the next thing to deliver. When we got there, it was true. However, we got something more.

We didn’t expect how much better it would be for the people doing the work. I didn’t expect it to change my entire outlook on the work. When you don’t see your work used, it’s joyless. When you can try something, deliver it, and get rapid feedback, it brings joy back to development, even more so when you’ve improved your test suite to the point where you don’t fear every keystroke. Getting into a CD workflow made me intolerant of working the way we were before. I feel process waste as pain. I won’t “test it later when we get time.” I won’t work that way ever again. Work shouldn’t suck.

Descale and Decouple for Speed

We knew we’d never be able to reach our goals without changing the system we were delivering. It was truly monstrous. It was the outcome of taking three related legacy systems and a fourth unrelated legacy system and merging them, with some splashy new user interfaces, into a bigger legacy system. A couple of years before this improvement effort, my manager asked how many lines of code the system was. Without comments, it was 25 million lines of executable code. Calling the architecture “spaghetti” would be a dire insult to pasta. Where there were web services, the service boundaries were defined by how big the service was. When it got “too big,” a new service, Service040, for example, would be created.

We needed to break it up to make it easier to deliver and modernize the tech stack. Step one was using Domain Driven Design to start untangling the business capabilities in the current system. We aimed to define specific capabilities and assign each to a product team. We knew about Conway’s Law, so we decided that if we were going to get the system architecture we needed, we needed to organize the teams to mirror that architecture. Today, people call that the “reverse Conway maneuver.”  We didn’t know it had a name. I’ve heard people say it doesn’t work. They are wrong. We got the system architecture we wanted by starting with the team structure and assigning each a product sub-domain. The internal architecture of each team’s domain was up to them. However, they were also encouraged to use and taught how to design small services for the sub-domains of their product.

We also wanted to ensure every team could deliver without the need to coordinate delivery with any other team. Part of that was how we defined the teams’ capabilities, but having the teams focus on Contract Driven Development (CDD) and evolutionary coding was critical. CDD is the process where teams with dependencies collaborate on API contract changes and then validate they can communicate with that new contract before they begin implementing the behaviors. This makes integration the first thing tested, usually within a few days of the discussion. Also important is how the changes are coded.

The consumer needs to write their component in a way that allows their new feature to be tested and delivered with the provider’s new contract but not activated until that contract is ready to be consumed. The provider needs to make changes that do not break the existing contract. Working this way, the consumer or provider can deliver their changes in any order. When both are in place, the new feature is ready to release to the end user.

By deliberately architecting product boundaries, the teams building each product, focusing on evolutionary coding techniques, and “contract first” delivery, we enabled each team to run as fast as possible. SAFe handles dependencies with release trains and PI planning meetings. We handled them with code. For example, if we had a feature that also required another team to implement a change, we could deploy our change and include logic that would activate our feature when their feature was delivered. We could do that either with a configuration change or, depending on the feature, simply have our code recognize the new properties in the contract were available and activate automatically.  
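
As a small illustration of that second option (this is not the actual code from the system described; all names are invented), a consumer can check whether the provider’s new contract field is present and activate the dependent feature only then:

# Illustrative consumer-side sketch: the new behavior activates only once the
# provider starts returning the new contract field. All names are invented.
def render_order_summary(order_api_response: dict) -> str:
    summary = f"Order {order_api_response['id']}: {order_api_response['status']}"

    # The provider's upcoming contract adds an optional "estimated_delivery"
    # field. Until it shows up, we keep the old behavior; no coordinated
    # release is needed between the two teams.
    eta = order_api_response.get("estimated_delivery")
    if eta is not None:
        summary += f" (arrives {eta})"
    return summary

# Before the provider ships: {"id": 1, "status": "shipped"} -> old output.
# After the provider ships:  {"id": 1, "status": "shipped",
#                             "estimated_delivery": "2023-11-02"} -> new output.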

Accelerating Forces Learning

It took us about 18 months after forming the first pilot product teams to get the initial teams to daily delivery. I learned from doing CD in the real world that you are not agile without CD. How can you claim to be agile if it takes two or more weeks to validate an idea? You’re emotionally invested by then and have spent too much money to let the idea go.

You cannot execute CD without continuous integration (CI). Because we took CI seriously, we needed to make sure that all of the tests required to validate a change were part of the commit for that change. We had to test during development. However, we were blocked by vague requirements. Focusing on CI pushed the team to understand the business needs and relentlessly remove uncertainty from acceptance criteria.

On my team, we decided that if we needed to debate story points, it was too big and had too much uncertainty to test during development. If we could not agree that anyone on the team could complete something in two days or less, we decomposed the work until we agreed. By doing this, we had the clarity we needed to stop doing exploratory development and hoping that was what was being asked for. Because we were using Behavior Driven Development (BDD) to define the work, we also had a more direct path from requirement to acceptance tests. Then, we just had to code the tests and the feature and run them down the pipeline.
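
As a toy illustration of that path from a BDD-style acceptance criterion to an executable check (the scenario and names are invented, not taken from the original system), a small, unambiguous story can be pinned down by a test like this, runnable with pytest:

# Scenario, written as a plain pytest test. The domain objects are invented.
#   Given a store with 3 widgets in stock
#   When a customer orders 2 widgets
#   Then the remaining stock is 1
def order(stock: dict, item: str, quantity: int) -> None:
    if stock.get(item, 0) < quantity:
        raise ValueError(f"not enough {item} in stock")
    stock[item] -= quantity

def test_ordering_reduces_stock():
    store = {"widgets": 3}               # Given
    order(store, "widgets", quantity=2)  # When
    assert store["widgets"] == 1         # Then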

You need to dig deep into quality engineering to be competent at CD. Since the CD pipeline should be the only method for determining if something meets our definition of “releasable,” a culture of continuous quality needs to be built. That means we are not simply creating unit tests. We are looking at every step, starting with product discovery, to find ways to validate the outcomes of that step. We are designing fast and efficient test suites. We are using techniques like BDD to validate that the requirements are clear. Testing becomes the job. Development flows from that.

This also takes time for the team to learn, and the best thing to do is find people competent at discovery to help them design better tests. QA professionals who think, “What could go wrong?” and help teams create strategies to detect that, instead of the vast majority who are trained to write test automation, are gold. However, under no circumstances should QA be developing the tests because they become a constraint rather than a force multiplier. CD can’t work that way.

The most important thing I learned was that it’s a more humane way of working. There’s less stress, more teamwork, less fear of breaking something, and much more certainty that we are probably building the right thing. CD is the tool for building high-performing teams.

Optimize for Operations

All pipelines should be designed for operations first. Life is uncertain — production breaks. We need the ability to fix things quickly without throwing gasoline on a dumpster fire. I carried a support pager for 20 years.  The one thing that was true for most of that time was that we always had some workaround process for delivering things in an emergency. This means that the handoffs we had for testing for normal changes were bypassed for an emergency. Then, we would be in a dumpster fire, hoping our bucket contained water and not gasoline.

With CD, that’s not allowed. We have precisely one process to deliver any change: the pipeline. The pipeline should be deterministic and contain all the validations to certify that an artifact meets our definition of “releasable.” Since, as a principle, we never bypass or deactivate quality gates in the pipeline for emergencies, we must design good tests for all of our acceptance criteria and continue to refine them to be fast, efficient, and effective as we learn more about possible failure conditions. This ensures hotfixes are safe while also driving improvement in feature delivery. This takes time, and the best thing to do is to define all of the acceptance criteria and measure how long it takes to complete them all, even the manual steps. Then, use the cycle time of each manual process as a roadmap for what to automate next.

What we did was focus on the duration of our pipeline and ensure we were testing everything required to deliver our product. We, the developers, took over all of the test automation. This took a lot of conversation with our Quality Engineering area since the pattern before was for them to write all the tests. However, we convinced them to let us try our way. The results proved that our way was better. We no longer had tests getting out of sync with the code, the tests ran faster, and they were far less likely to return false positives. We trusted our tests more every day, and they proved their value later on when the team was put under extreme stress by an expanding scope, shrinking timeline, “critical project.”

Another critical design consideration was that we needed to validate that each component was deliverable without integrating the entire system. Using E2E tests for acceptance testing is a common but flawed practice. If we execute DDD correctly, then any need to do E2E for acceptance testing can be viewed as an architecture defect. They also harm our ability to address impacting incidents.

For example, if done with live services, one of the tests we needed to run required creating a dummy purchase order in another system, flowing that PO through the upstream supply chain systems, processing that PO with the legacy system we were breaking apart, and then running our test. Each test run required around four hours. That’s a way to validate that our acceptance tests are valid occasionally, but not a good way to do acceptance testing, especially not during an emergency. Instead, we created a virtual service. That service could return a mock response when we sent a test header so we could validate we were integrating correctly. That test required milliseconds to execute rather than hours.
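
A minimal sketch of that kind of virtual service is shown below, using Flask. The header name, route, and payload are invented for illustration; the point is only the pattern of returning a canned, deterministic response whenever the test header is present.

# Illustrative virtual service: when the caller sends a test header, return a
# canned response instead of invoking the real upstream supply chain flow.
# Header name, route, and payload are invented for illustration.
from flask import Flask, jsonify, request
import requests

app = Flask(__name__)

CANNED_PO = {"po_number": "TEST-0001", "status": "processed", "lines": 1}
UPSTREAM = "https://upstream.example.internal/purchase-orders"  # placeholder

@app.route("/purchase-orders/<po_number>")
def get_purchase_order(po_number: str):
    if request.headers.get("X-Test-Request") == "true":
        # Acceptance tests exercise the real HTTP integration path but get a
        # deterministic answer in milliseconds instead of a four-hour flow.
        return jsonify(CANNED_PO)
    upstream = requests.get(f"{UPSTREAM}/{po_number}", timeout=30)
    return jsonify(upstream.json()), upstream.status_code

if __name__ == "__main__":
    app.run(port=8080)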

We could run it every time we made a change to the trunk (multiple times per day) and have a high level of confidence that we didn’t break anything. That test also prevented a problem from becoming a bigger problem. My team ran our pipeline, and that test failed. The other team had accidentally broken their contract, and the test caught it within minutes of it happening and before that break could flow to production. Our focus on DDD resulted in faster, more reliable tests than any of the attempts at E2E testing that the testing area attempted. Because of that, CD made operations more robust.

Engineering Trumps Scaling Frameworks

We loved development again when we were able to get CD off the ground. Delivery is addictive, and the more frequently we can do that, the faster we learn. Relentlessly overcoming the problems that prevent frequent delivery also lowers process overhead and the cost of change. That, in turn, makes it economically more attractive to try new ideas and get feedback instead of just hoping your investment returns results. You don’t need SAFe’s PI planning when you have product roadmaps and teams that handle dependencies with code. PI plans are static.

Roadmaps adjust from what we learn after delivering. Spending two days planning how to manage dependencies with the process and keeping teams in lock-step means every team delivers at the pace of the slowest team. If we decouple and descale, teams are unchained. Costs decrease. Feedback loops accelerate. People are happier. All of these are better for the bottom line.

On the first team where we implemented CD, we improved our delivery cadence from monthly (or less) to several times per day. We had removed so much friction from the process that we could get ideas from the users, decompose them, develop them, and deliver them within 48 hours. Smaller tweaks could take less than a couple of hours from when we received the idea from the field. That feedback loop raised the quality and enjoyment level for us and our end users.

Measure the Flow!

Metrics are a deep topic and one I talk about frequently. One big mistake we made was not measuring the impact of our improvements on the business. When management changed, we didn’t have a way to show that what we were doing was better. To be frank, the new management had other ideas – poorly educated ideas. Things degraded, and the best people left. Since then, I’ve become a bit obsessed with measuring things correctly.

For a team wanting to get closer to CD, focus on a few things first:

  1. How frequently are we integrating code into the trunk? For CI, this should be at least once per day per team member on average. Anything less is not CI. CI is a forcing function for learning to break changes into small, deliverable pieces.
  2. How long does it take for us, as a team, to deliver a story? We want this to be two days maximum. Tracking this and keeping it small forces us to get into the details and reduce uncertainty. It also makes it easy for us to forecast delivery and identify when something is trending later than planned. Keep things small.
  3. How long does it take for a change to reach the end of our pipeline? Pipeline cycle time is a critical quality feedback loop and needs to keep improving.
  4. How many defects are reported week to week? It doesn’t matter if they are implementation errors, “I didn’t expect it to work this way,” or “I don’t like this color.” Treat them all the same. They all indicate some failure in our quality process. Quality starts with the idea, not with coding.

Since this journey, I’ve become passionate about continuous delivery as a forcing function for quality. I’ve seen on multiple teams in multiple organizations what a positive impact it has on everything about the work and the outcomes. As a community, we have also seen many organizations not take a holistic approach, throw tools at the problem, ignore the fact that this is a quality initiative, and hurt themselves. It’s important that people understand the principles and recommended practices before diving in head first.

You won’t be agile by focusing on agile frameworks. Agility requires changing everything we do, beginning with engineering our systems for lower delivery friction. By asking ourselves, “Why can’t we deliver today’s work today?” and then relentlessly solving those problems, we improve everything about how we work as an organization. Deploy more and sleep better.

About the Author



Presentation: WebGPU is Not Just About the Web

MMS Founder
MMS Elie Michel

Article originally posted on InfoQ. Visit InfoQ

Transcript

Michel: This is a classroom in 2020. Basically, a bunch of white names on a black background while a teacher speaks into the void. This was a practical session, even though it wasn’t that practical, and the students were working on a computer graphics assignment. The people raising hands are Mac users. They all have a problem here. They have a problem because when I wrote the code base, students were all using the university computers, which were not Macs. That’s when I realized that I actually had to backport the whole code base live to an older version of OpenGL, because macOS does not support the latest version. It’s actually even deprecating the newer versions of OpenGL. That was a hard way to learn that portability is important, and that portability is also not really easy when it comes to graphics programming, even for native graphics programming. OpenGL was, for a long time, a good API for doing portable graphics, but that’s no longer the case. We’ll first see how WebGPU is actually a good candidate to replace it, even for native GPU programming, not just for the web. Then I’m going to give a more detailed and more practical introduction to getting started with WebGPU for native development, using C++ in this case. I’m going to finish with a point about whether or not it’s too early to adopt WebGPU, because the design process is still ongoing, still unfinished. It might be too early or not; I think it’s ok, but we’ll see in more detail.

Background

I do a lot of prototypes. I’m a researcher in computer graphics. In a lot of cases, I have to write little applications from scratch using real-time graphics, using GPU programming, so I often restart and always wonder whether the solution I adopted last time is still the most relevant one. There’s a need to iterate really fast, so it’s not always a good option to go too low level, but I still need a lot of control over what I’m doing. I think this is quite representative of a lot of needs. The other interesting part is that I also teach computer graphics. For this, I’m always wondering what the best way is to get started with GPU programs, either for 3D graphics or for all the other things that a GPU can actually do nowadays; it’s not limited to 3D graphics. For both of those reasons, I got interested in WebGPU and started writing a guide on how to use WebGPU, not especially for the web, but focused on desktop programming. That’s what I was mainly interested in, even though as a side effect it will enable you to use this on the web once the WebGPU API is officially released in web browsers.

Graphics APIs

Our first point: graphics APIs, like OpenGL and WebGPU. The first question is actually, why do we need such a thing to do 3D or to access the GPU? Then, what are the options? What is the problem? Why do we need a graphics API? I’m first going to assume that we actually need to use the GPU; otherwise, indeed, we don’t need a graphics API. Why would we need to use the GPU? Because it’s a massively parallel computing device and a lot of problems are actually massively parallelizable. That’s the case for 3D graphics. That’s also the case for a lot of simulations, physics simulations. That’s the case for neural network training and evaluation. That’s the case for some cryptographic problems as well. The other hypothesis I’m making is that we intend to have our code portable to multiple platforms, because if we don’t, we’ll make different choices and really focus on the target platform.

Why do we need a graphics API? Because the GPU is like another machine in the machine. It has its own memory. It’s a completely different device, and it actually communicates with the CPU just through a PCI Express connection, but each has its own life. It’s really like a remote machine. If you’re used to the web, think of it as if the GPU were a server and the CPU the client. They’re far away from each other, and communicating between them takes time. Everything we do when we program in languages like C++ here, but also most other languages, is describe the behavior of the CPU. If we want to run things on the GPU, we actually have to talk with the driver to communicate to the GPU what we expect it to do. It’s to handle this communication that we need the graphics API.

What are the options? There are a lot of different ways of doing this. Some options are provided directly by the manufacturers of the GPU, of the device itself; others are provided by the operating system. Last, we’ll see portable APIs that try to be standards implemented by multiple drivers and multiple OSes. The vendor-specific APIs, I’m not going to focus too much on these. For example, there is CUDA, which is an NVIDIA-specific API for general-purpose GPU programming. I’m also briefly mentioning Mantle; it’s no longer a thing, but it was at the base of Vulkan. When it started, it was an AMD-specific API. For OS-specific APIs, the well-known one is Direct3D. That’s been around for a lifetime, even though each version of Direct3D is really different from the previous one. Direct3D 12 is really different from Direct3D 11, for instance, which was also different from the previous ones. That’s the native graphics API for Windows and for the Xbox as well. If you target only those platforms, then just go for the DirectX API.

There is Metal for macOS and iOS. Same thing, but for Apple devices. It’s not that old, actually. It’s much more recent than DirectX. I’m briefly mentioning the fact that if you go to other kinds of devices, especially gaming consoles, you will also have some specific APIs for these that are provided by the platform. I’m mostly going to focus on desktop, because, often, we want something to be portable so that we don’t have to rewrite for those different APIs. We need portable APIs. For a long time, the portable API for 3D was OpenGL. OpenGL also had some variants. One is OpenGL ES, which targets low-end devices, phones in particular. WebGL is basically a mapping of OpenGL ES exposed as a JavaScript API. There is also OpenCL, which was more focused on general-purpose GPU programming. OpenGL was focused on 3D graphics, OpenCL on non-3D things. Since the compute pipeline and the render pipeline got much closer over time, it’s really common to use both in the same application, and it no longer made sense to have two different APIs, in my opinion. What was supposed to be the new version of OpenGL is really different from what came before, because OpenGL had always been backward compatible with previous versions. The new API that totally broke this and went much more low level is Vulkan. Vulkan is supposed to be as portable as OpenGL, but the thing is that Apple didn’t really adopt it because they wanted to focus on Metal and didn’t want to support Vulkan. That’s why it actually becomes a bit complicated now to have a future-proof, portable way of communicating with the GPU. That’s where WebGPU, which must be portable because it’s an API developed for the web, becomes really promising.

How do we get a truly portable API? That’s what I was just mentioning. If you look at a compatibility table, you see there’s actually no silver bullet, no API that will run on everything. That’s something the developers of WebGPU themselves were wondering: could we do like WebGL, just map an existing API and have it exposed in JavaScript? They couldn’t really settle on one. The choice that had been made for WebGL was to take the common denominator, but that limits you to the capabilities of the lowest-end device. If we want access to more performance and more advanced features of the GPU, we need something different. So the question was: whatever the draft of WebGPU says, how exactly will the browser developers implement it in practice? That’s actually a similar question to the one we wonder about when we do desktop GPU programming: how do we develop something that is compatible with all the platforms?

What they ended up deciding for the case of the browser was to actually duplicate the development effort. On top, there is this WebGPU abstraction layer, this WebGPU API, which has multiple backends. That’s valid both for the Firefox implementation and the Chrome implementation. They each have different low-level APIs behind them depending on the platform they’re running on. Since this is a lot of effort, they actually thought about sharing it, making it reusable, even when what we’re doing is not JavaScript programming. If we’re doing just native programming, we can actually use WebGPU, because the different implementations of WebGPU are in the process of agreeing on a common header, webgpu.h, that exposes the WebGPU backend to native code, and then indirectly lets us use either Metal, DirectX, Vulkan, or others, depending on the platform. As a user of the WebGPU API, you don’t have to care that much about it. That’s what is really interesting here with WebGPU, and why we can really consider it as a desktop API for graphics as well. This model of having a layer that abstracts the differences between backends is not new; it’s actually something done in other programs, because the problem we were facing at the beginning is one a lot of real-life applications face. Just taking a few examples: in Unreal Engine, there is this render hardware interface that abstracts the different possible backends. Here is another example in Qt. Qt is doing user interfaces, so it also needs to talk with different low-level graphics APIs, so it also has such an RHI. One last example is a library developed by NVIDIA; I don’t know how much it’s used in practice, but it was also facing the same need and also proposed itself for others to reuse.

A question is, among all those different render hardware interfaces, which one should we use? Would it be WebGPU, or the NVIDIA version, or the Unreal version, or the Qt version? A lot of them have been developed with a specific application in mind. Thinking of Qt and Unreal, for instance, they will have some bias; they will not be fully application-agnostic. They’re really good for developing a game engine or for developing a UI framework. If you’re not sure, if you want to learn an interface that will be reusable in a lot of different scenarios, then it should be something more agnostic. Then, WebGPU seems to be a really good choice because it will be used by a lot of people since it will be the web API as well. There will be a lot of documentation and a large user base. That’s why I think it’s interesting to focus on WebGPU for this, even though other render hardware interfaces exist.

I’m just going to finish this part with a slight note about how things changed. If we compare it to the case of OpenGL, the driver had a lot of responsibility. That’s why, actually, at some point OpenGL became intractable: it was asking too much of the driver. Especially to honor backward compatibility, the driver still has to ensure it supports OpenGL 1.0, which is really old and no longer in line with the architecture of modern GPUs. With this model of developing an RHI, the thing is that we decouple the low-level driver part from the user-facing interface, which are developed by different teams: the driver here would be developed by AMD or NVIDIA, while the library behind the render hardware interface is developed by the Firefox team or the Chrome team. It’s a powerful way of sharing the development effort. A lot of modern choices for GPU programming go in this direction.

If I recap: if we need portability and performance at the same time, we need to use some kind of RHI because, otherwise, we have to code the same thing multiple times. WebGPU is a good candidate for this because we don’t have to build that RHI ourselves. Since it’s domain agnostic, it has a lot of potential to be something you can learn once and reuse in a lot of different contexts. It’s likely future proof because it’s developed to last and to be maintained for a long time by powerful actors, namely the web browsers. I think it’s a good bet. Plus, you get some bonuses with this. The first one is that I find it a reasonable level of abstraction; it’s organized in a more modern way than OpenGL. If you compare it to Vulkan, for instance, it is higher level, so it doesn’t take thousands of lines of code to get to Hello Triangle. I really like using it so far. Also, since there are two concurrent implementations of this same API, we can expect some healthy competition here: Chrome and Firefox are challenging each other, and they will likely end up with something really low overhead.

WebGPU Native – How to Get Started

We can go now to the second part: if you want to do development with WebGPU now, how do we get started? The first thing is to build a really small Hello World. Then I’m going to show the application skeleton. I’m not going to go into the details of all the parts because there is the guide I’ve started writing to go further. Instead, I’m going to focus a bit on how to debug things, because it’s also really important to be able to debug what we’re doing when we start with a new tool, especially since there is this dual timeline, with the CPU on one hand and the GPU on the other, it can become tricky. How do we get started? It starts by just including the WebGPU header, of course. Then we can create the WebGPU instance. We don’t need any context; we don’t need a window, for instance, to create this. One thing we can see already is that when we create objects in WebGPU, it always looks the same. There is this create-something function that takes a descriptor, a structure that holds all the possible arguments for the object creation, because there can be a lot in some cases. Instead of having a lot of arguments in the create-instance function, it’s all held in the descriptor, for which you can set default values in a utility function, for instance. Then you can check for errors and just display it; that’s a really minimal Hello World.

Here, we see two things. As I was saying, there is the descriptor. Keep in mind that descriptors always have this nextInChain field, which is dedicated to the extension mechanism and which you have to set to a null pointer. The create-instance function returns an opaque handle; it’s a pointer, basically. It’s really something that you can copy around without worrying about the cost of copying it, because it’s just a number. There are also two other kinds of types manipulated throughout the API: some enum values and some structs that are just used to detail the descriptor, to organize the things inside it. It’s really simple. There are descriptors, handles, and just enums and structs that are details, actually. All the handles represent objects that are created with a create-something function, and there is always an associated descriptor. Simple.
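To make this concrete, here is a minimal sketch of that Hello World in C++ against webgpu.h. The include path and exact struct fields can vary between webgpu.h revisions and distributions, so treat this as an illustration rather than the exact code shown in the talk.

#include <cstdio>
#include <webgpu/webgpu.h>

int main() {
    // The descriptor pattern: one struct holds all the creation options.
    WGPUInstanceDescriptor desc = {};
    desc.nextInChain = nullptr; // reserved for the extension mechanism

    // Create the instance; the returned handle is an opaque, pointer-sized value.
    WGPUInstance instance = wgpuCreateInstance(&desc);
    if (!instance) {
        fprintf(stderr, "Could not create WebGPU instance\n");
        return 1;
    }

    // Print the handle's address: a non-null value means it worked.
    printf("WebGPU instance: %p\n", (void*)instance);
    return 0;
}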

How do we build this in practice? That’s where it becomes tricky. Not that tricky, but not obvious, because the first thing is, where is this webgpu.h header? Also, if we have a header that declares symbols, they must be defined somewhere, so how do we link to an implementation of WebGPU? As I was mentioning, contrary to other APIs, those are not provided by the driver. We have multiple choices. Either we use wgpu-native, which is based on the Firefox backend, or we use Dawn, which is the Chrome backend. The last possibility is a bit different, but we could also just use Emscripten, which provides the webgpu.h header but doesn’t provide any symbols because it doesn’t need to: it will just map the calls to the WebGPU API to JavaScript calls once it transpiles the code to WebAssembly. I’m going to rule out Emscripten, because it’s a bit different, but keep in mind that it’s always a possibility. Something that I wanted when I started working on this was an experience similar to what I have when I do OpenGL and use something like Glad. It’s a really small tool. I can just download something that is not too big, and I link my application. It doesn’t take time to build. It doesn’t require a complicated setup. It doesn’t invade my build system. I was looking for this, and I started comparing wgpu-native and Dawn. First, wgpu is Rust based, which is great, but not that easy to build from scratch if I’m doing a C++ project. That was something that was a bit of a concern for me at this point. Then I looked at Dawn. The thing is, Dawn doesn’t have a standard build system either; it really needs some extra tools. It also needs Python, because it autogenerates some parts of the code. It was not obvious either how to build it from scratch. I’m not saying it’s not possible; it’s just not really easy to do. From the perspective of sharing this with students, for instance, or with people who are not experts in build systems and don’t want something complicated, that was also a concern.

I started working on stripping down a build of Dawn after generating the Python-generated parts. It wasn’t so straightforward. In what follows, I provide a version that I’m satisfied with because it doesn’t need any extra tool besides CMake and Python. There’s still this extra Python dependency, but no longer the need for depot_tools, and it can just be included using FetchContent, for instance, or as a submodule. Still, it was taking time to build because it’s actually a big project. There are a lot of lines of code, because it’s a complicated thing to support all those different backends. I kept looking for ways to make this initial build much faster for the end user. One possibility was to provide a pre-built version. I was thinking of maybe a build based on Zig that already had some scripts set up to prebuild their static libraries, but there was no MSVC version in it. I’m just providing the link in case you’re interested, https://github.com/hexops/mach-gpu-dawn. Once I started thinking about using a prebuilt version, I realized that wgpu being Rust based is no longer an issue in this scenario. I was happy to see that these projects provide autogenerated builds at a regular pace.

The only thing was that there were some issues with naming and some binaries were missing sometimes, so I had to curate it a little bit. That’s why I ended up actually repackaging the distributions of WebGPU. I did two: one based on wgpu-native, the other one on Dawn, so that we can compare them equally. The goal was to have a clean CMake integration, something where I add the distribution, then just link my application to the WebGPU target, and it just works. I also wanted this integration to be compatible with emcmake, to use Emscripten, and to not have too many dependencies; I didn’t want depot_tools to be a dependency for the end user. I wanted those distributions to be really interchangeable, so that at any point in time I can switch from the wgpu-native backend to the Dawn backend, because I want to compare them for benchmarking reasons, for instance, or because they’re not fully in line and I prefer the debug messages from Dawn but it was faster to develop at first with wgpu-native. That’s always something we can do with this way of packaging the application. I did this for both, and I added a define to be able to tell the difference, because the WebGPU specification is still a work in progress and some things are still different between them. In the long run, I knew this would not be needed; I would not have to share a repackaged version myself. For the time being, before it’s fully solved, this is something that can be quite helpful.

If we come back to this initial question of how to build this Hello World: we just download one of the distributions, any one of them, put it next to our main.cpp, add a really simple CMakeLists file, and we can build. Nothing crazy here; it’s just showing the address of the instance pointer. Since it’s not null, it means it worked. We’re able to use WebGPU, so what will the application look like? First there is the initialization of the device; I’m going to focus a bit on this afterwards. Then we typically load the resources. This means allocating memory and copying things from the CPU to the GPU memory: textures, buffers, and also shaders, which are programs executed on the GPU. Then we can initialize the set of bindings that tells the GPU how to access the different resources, because there are really different kinds of memory within GPU memory, so that each can be accessed through an optimized path depending on the usage. This is quite different from the way CPU memory works, and it plays an important role in the performance of GPU programming.

Then you set up pipelines. This is also something specific to the GPU, because the GPU has this weird mix of fixed stages, where some operations are really hard-coded in the hardware itself. You can only tune those by changing some options, not really program them. There are also stages that are programmable, like the vertex shader, the fragment shader, and the compute shader if you’re using a compute pipeline. The pipeline object basically saves the state of all the options that we set up for both fixed and programmable stages. Importantly, the shaders use a new shading language, WGSL, which is neither GLSL nor HLSL nor SPIR-V, even though for now it is still possible to use SPIR-V shaders in some implementations. In the long run, this will be dropped, so make the move to learn WGSL. It’s not that different conceptually, even though it uses a different syntax from the traditional shader languages.
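As a taste of WGSL and of how a shader module is created from C++, here is a small, hedged sketch: the WGSL source draws a single hard-coded triangle, and the chained-descriptor setup follows the 2023-era webgpu.h. The exact struct and field names (for example WGPUShaderModuleWGSLDescriptor and its code field) have shifted between header revisions, so check the header of your distribution.

#include <webgpu/webgpu.h>

// WGSL source embedded as a C++ raw string: one hard-coded triangle, one flat color.
static const char* kShaderSource = R"wgsl(
@vertex
fn vs_main(@builtin(vertex_index) i: u32) -> @builtin(position) vec4f {
    var p = array<vec2f, 3>(vec2f(-0.5, -0.5), vec2f(0.5, -0.5), vec2f(0.0, 0.5));
    return vec4f(p[i], 0.0, 1.0);
}

@fragment
fn fs_main() -> @location(0) vec4f {
    return vec4f(1.0, 0.5, 0.0, 1.0);
}
)wgsl";

// Create a shader module from WGSL; `device` is assumed to already exist.
WGPUShaderModule createTriangleShader(WGPUDevice device) {
    WGPUShaderModuleWGSLDescriptor wgslDesc = {};
    wgslDesc.chain.next = nullptr;
    wgslDesc.chain.sType = WGPUSType_ShaderModuleWGSLDescriptor;
    wgslDesc.code = kShaderSource;

    WGPUShaderModuleDescriptor desc = {};
    desc.nextInChain = &wgslDesc.chain; // the nextInChain extension mechanism mentioned earlier
    return wgpuDeviceCreateShaderModule(device, &desc);
}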

Then comes the core of the application. You submit commands to the GPU, and the GPU executes shaders and runs pipelines. You can eventually fetch back some data from the GPU; I’m going to detail this later. This is asynchronous, because, again, the communication between the VRAM and the RAM takes time. All the other operations issued from the CPU towards the GPU are fire and forget. You say, do something, and when the function returns on the CPU side, it doesn’t mean that it’s done; it just means that the driver received the instruction and will eventually forward it to the GPU. This is not a problem until you actually want to get some results back from the GPU, and that’s where some asynchronicity is needed. At the end, of course, you clean up, which is still a problem now, because there is a key difference between the wgpu and Dawn implementations when it comes to cleaning up resources. Optionally, if you’re doing graphics, there is also the swap chain to set up, which tells how the GPU communicates with the screen by presenting the framebuffer.

With all this, you can have this kind of application. Here is just an extract from the guide, where you load a 3D model, turn it around, and have some really basic UI. There’s absolutely nothing fancy here, just something to play with. Just for the case of graphics programming, really 3D rendering, I also had to develop some small libraries, one to communicate with GLFW, which is the library typically used to open a window. It’s just one file to add to your project. Also, here, I’m using the ImGui library for the user interface, which is really handy, but I had some issues with the WebGPU backend; there is a branch of mine that you can use. Of all this skeleton, I’m going to focus on the device creation, because if you’re interested in targeting various types of devices and still benefiting from their performance, there is something interesting going on here.

The device creation is split into two parts: the creation of the adapter, and then the creation of the device. The adapter is not really a creation; we get the adapter, which holds the information about the context. From this we can know the hard limits and the capabilities of the device, and from this we can set up the virtual limits of the device object that we use to abstract the adapter. This mechanism is important, because if we don’t set any limits, the device will use the same ones as the adapter, which is really easy: you don’t have to do anything. But it means that your application might fail anywhere in the code, because you don’t really know about the limits. Suddenly, you’re maybe creating a new texture, and this texture exceeds the maximum number of textures, and the code fails. It might work on your machine because your adapter has different limits than the machine of your colleague, who has a different GPU, for instance. This is really not convenient and something to avoid.

Instead, you really should specify limits when creating the device. This means the device creation might fail, but it will fail right away, and you will know that the device simply doesn’t support the requirements of your program. Stating the limits is stating the minimum requirements of the program. If you don’t want to fail outright, you can actually define multiple quality tiers with different limit presets. When you get the adapter, you look at the supported limits and stop at the first preset that is fully supported. Then you set up the device according to that, and you keep maybe a global variable or an application attribute that says at any time which quality tier you are on, so that your application can adapt without breaking. A quick note on the initialization of the device in the case of the web target: it is handled on the JavaScript side, so instead of doing all this you just use the emscripten_webgpu_get_device function provided by Emscripten and do the initialization on the JavaScript side. Something to keep in mind for portability, if you want to target the web, of course.
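Here is a hedged C++ sketch of that quality-tier idea. The two presets and the specific limits checked are hypothetical, and the wgpuAdapterGetLimits call and WGPUSupportedLimits struct follow the 2023-era webgpu.h, so they may differ slightly in your distribution. The selected values would then go into WGPUDeviceDescriptor::requiredLimits when requesting the device.

#include <cstdint>
#include <cstdio>
#include <webgpu/webgpu.h>

// Hypothetical quality tiers: each lists only the limits this application actually relies on.
struct QualityTier {
    const char* name;
    uint32_t maxTextureDimension2D;
    uint32_t maxBindGroups;
};

static const QualityTier kTiers[] = {
    {"high", 8192, 4},
    {"low",  2048, 2},
};

// Pick the best tier fully supported by the adapter, so an unsupported machine
// fails (or downgrades) at startup rather than somewhere in the middle of a frame.
const QualityTier* pickTier(WGPUAdapter adapter) {
    WGPUSupportedLimits supported = {};
    if (!wgpuAdapterGetLimits(adapter, &supported)) {
        return nullptr;
    }
    for (const QualityTier& tier : kTiers) {
        if (supported.limits.maxTextureDimension2D >= tier.maxTextureDimension2D &&
            supported.limits.maxBindGroups >= tier.maxBindGroups) {
            printf("Running at quality tier '%s'\n", tier.name);
            return &tier;
        }
    }
    return nullptr; // adapter is below even the lowest tier
}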

Debugging

Last practical point: debugging. How do we debug applications using WebGPU? The first thing to go for is the error callbacks. There are two error callbacks that you can set up right after creating the device. Maybe the most important one for native programming is the UncapturedErrorCallback. This callback is basically the one through which all the errors will be sent. I recommend that you put a breakpoint in it so that your program will stop whenever it encounters an error. You first get a message, and you can also inspect the stack. The thing is that the different implementations have really different error messages. Here, I’m just showing an example for wgpu-native. We can understand it, there’s no problem. If you compare it with the Dawn version, that one actually details all the steps it went through before reaching the issue. Also, it uses labels: in all the descriptors, for every object that you create, you can actually specify a label so that it can be used like this. There are real differences. Here is another example, on shader validation. Tint is the shader compiler for Dawn; Naga is the one for wgpu. They present errors in different ways. That could be a reason to choose Dawn over wgpu, even though the Dawn distribution takes time to compile because it’s not prebuilt.
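For reference, a minimal way to install such a callback from C++ might look like the following. The device-lost callback is assumed to be the second callback the talk alludes to; function and enum names follow the 2023-era webgpu.h.

#include <cstdio>
#include <webgpu/webgpu.h>

// All validation and runtime errors funnel through here: a good place for a breakpoint.
static void onUncapturedError(WGPUErrorType type, const char* message, void* /*userdata*/) {
    fprintf(stderr, "[WebGPU error %d] %s\n", (int)type, message ? message : "(no message)");
}

// Reports when the device is lost (driver reset, device destroyed, ...).
static void onDeviceLost(WGPUDeviceLostReason reason, const char* message, void* /*userdata*/) {
    fprintf(stderr, "[WebGPU device lost %d] %s\n", (int)reason, message ? message : "");
}

void installDebugCallbacks(WGPUDevice device) {
    wgpuDeviceSetUncapturedErrorCallback(device, onUncapturedError, nullptr);
    wgpuDeviceSetDeviceLostCallback(device, onDeviceLost, nullptr);
}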

Another thing that is really useful is graphics debuggers. That’s not specific to WebGPU at all, but it’s something you really need when you want to talk with the GPU. Two key examples: RenderDoc and NVIDIA Nsight. The only thing about a graphics debugger is that it will actually show commands from the underlying low-level API and not commands from WebGPU. It doesn’t know about WebGPU because it intercepts what’s happening between the CPU and the GPU, and at this stage it’s all low-level API commands. In a lot of cases, just looking at the order of things, you can guess which command corresponds to which one in your code, for simple enough applications. If your application becomes really complex, that might become an issue. Something that is more WebGPU aware is the dev tools of the browser itself, even though they’re not as feature-full as a graphics debugger when it comes to GPU programming, of course. It sounds obvious, but when you debug, something that is important to look at is the JavaScript specification. Even though you’re doing native, there’s not really a specification for the native API, so I end up using the JavaScript spec a lot to figure out how the native one works. Actually, if you’re interested in the specificities of the native API, just look at the header, and I’ll have a more advanced tip for this.

The last thing for debugging is profiling. This is not obvious because, again, when you’re doing GPU programming you have different timelines. The CPU lives on one side, the GPU on the other. When the CPU issues commands for the GPU, it doesn’t really know when they start or when they stop. You cannot use CPU-side tools to measure performance on the GPU. In theory, WebGPU provides something for this: timestamp queries. You create queries, you ask the GPU to measure things, and it sends the result back to you. These are not implemented yet, and there is even a concern about whether they will be implemented, because timestamp queries can leak a lot of private information. Since the web context needs to be safe and not leak too much information about the clients, this might be a real privacy issue. In the long run, they would still be usable for native programming, but disabled in the JavaScript API. Something you can use, though, which is handy, is debug groups: something that has no effect on your program itself, but that will let you see, here in Nsight, segments of performance, timings that you can basically name from your C++ code. That’s really helpful for measuring things.
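Wrapping work in a named debug group is a one-liner on each side of the pass. A minimal sketch, assuming you already have a command encoder (the group label is arbitrary):

#include <webgpu/webgpu.h>

// Wrap a chunk of GPU work in a named debug group so tools like RenderDoc or Nsight
// display it as a labelled, timed segment. `encoder` is assumed to be a command
// encoder you are currently recording into.
void encodeLabelledPass(WGPUCommandEncoder encoder) {
    wgpuCommandEncoderPushDebugGroup(encoder, "SDF contouring");
    // ... encode render or compute passes here ...
    wgpuCommandEncoderPopDebugGroup(encoder);
}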

Just taking a little example here, since I’m talking about figures, I’m going to show some comparing two algorithms for contouring a signed distance field. I’m comparing both algorithms, and I’m also comparing the wgpu-native and the Dawn versions. I’d be careful with those figures; I didn’t carry out intensive benchmarking with a lot of different cases, it’s just one example. What we can see is that Dawn is a little bit slower, but it might be possible to accelerate it by disabling robustness features. I haven’t stress tested this. What’s interesting, actually, is that those figures are quite similar, which shows that whether it’s wgpu-native or Dawn, they both talk with the GPU with such low-level access that they can reach similar performance, and that there is not so much overhead. That’s something that should be measured by actually comparing with a non-WebGPU implementation. I’m optimistic about this.

Is WebGPU Ready Enough (For Native Applications)?

To finish this talk, maybe the key question that remains is, is it ready enough to be used now for native applications, for what concerns me? The first thing is that backend support is rather good, whether it’s for wgpu or Dawn. This is not a detailed table, but it at least shows that for modern desktops it’s well supported. There are a few pain points I should mention. One of them is the asynchronous operations I talked about for getting data back from the GPU: I still end up needing to wait in a busy loop sometimes, submitting empty queues just so the backend checks whether the async operation is finished. There might be something unified for this in a later version of the standard. Another thing is that I miss shader module introspection a bit, because here is the typical beginning of a shader. You define some variables, and by numbers, like binding 0 and binding 1 here, you tell how they will be filled in from the CPU side. Just communicating with numbers instead of using the variable names makes the C++ side of the code a bit unclear sometimes. For the JavaScript version, there is an automatic layout that is able to infer, by looking at the shader code, what the binding layout should look like. This has raised some issues, so it’s been removed from the native version for some good reasons.

Lastly, there are still a lot of differences between the implementations. There are a lot of examples here; I’m just focusing on one key difference, which is that there is no agreement yet on the mechanism for releasing, for freeing the memory after use: wgpu uses Drop semantics, and Dawn, Release. What’s the difference? Drop means this object will never be used by anything, neither me nor anybody else, just destroy it. In the case of Dawn, the call to Release means I’m done with this, but if somebody else actually still uses it, then don’t free it. To go with this mechanism, there is also Reference, which does the opposite: ok, I’m going to use this, so don’t free it until I release it. That means Dawn actually exposes a reference-counting capability that wgpu prefers to hide. The debate about which one should be adopted is still ongoing.

Lastly, some limitations might remain limitations even in the longer run. Here are a few questions. The timestamp queries: again, on desktop, it should be possible to enable them, maybe it’s even possible already, but on the JavaScript side that will be an issue. If the logic of your program relies on timestamp queries to optimize some execution path, it will be a problem. Another question is, how does shader compilation time get handled? Because in a complex application, compiling the shaders can take multiple seconds, tens of seconds, or even minutes. In a desktop application, you would cache the compiled version, or the driver would do it for you. The question is, do we have some control over this cache? How can we avoid the user experience where someone returns to the web application they were used to, and suddenly it has to rebuild all the shaders for some reason and takes too much time to start? Another open question, I didn’t look a lot into this, but I didn’t see anything related to tiled rendering, which is something specific to low-end devices that have low memory bandwidth and that might require some specific care. The question is, are we able to detect it in WebGPU so that we can adapt the code for this kind of low-consumption device? I haven’t found anything related to this so far. I’m just providing a link to some other debates that are really interesting, with all the arguments that can be used, https://kvark.github.io/webgpu-debate. One last thing: the extension mechanism could be used if you’re not targeting the web. Here’s an example.

Conclusion

The key question is, should I use WebGPU? In my context, where I look for portability and a future-proof API, and where I was using OpenGL for this and it’s becoming problematic, I would suggest: yes, go for it. You can already start looking at it. You can already start using it. Be ready to change some things in the coming months, because maybe some issues will be fixed, some slight changes will occur, and that would break existing code. At this point, no strong change should be expected; learning it right now should be representative of what it will be in the end. I think it’s a good domain-agnostic render hardware interface. It’s not too low level, but it’s more in line with the way things are underneath, so it can be efficient even without being too low level. It’s future-oriented, because it’s going to be the API of the web, so there will be a huge user base behind it. My bet is that it will become the most used graphics API, even for desktop, because of this. Even though it’s unfinished, it’s in very active development, so you can expect it to be ready, maybe not by the end of the year, but it’s really going fast. It’s always nice to work with something that is dynamic, where you can talk with the people behind it.

Of course, you get a web-ready code base for free, which, in my case, I was never targeting the web, because I wasn’t interested in downgrading my application just for it to be compatible with the web. Now, I don’t have to care about this, I can just do things as I was doing for my desktop experiment and then share it on the web, without too much concern. As long as people use a WebGPU-ready browser, which is not so many people for now. I’m just showing an example that is not 3D, that’s something that is really recent, the demo where people used WebGPU for neural network inference here for Stable Diffusion, so image generation. It’s a really relevant example of how WebGPU will enable things on the web that are using the GPU but not related to 3D at all. There’s likely a lot of other applications in a lot of other domains. To finish, just a link to this guide that I’ve started working on with much more details about all the things I was mentioning here, https://eliemichel.github.io/LearnWebGPU. I’ve opened a Discord to support this guide, but there are also existing communities centered around wgpu and around Dawn that I invite you to join, because that’s where most of the developers are currently.

See more presentations with transcripts



Janney Montgomery Scott LLC Has $760000 Stock Holdings in MongoDB, Inc. (NASDAQ:MDB)

MMS Founder
MMS RSS

Posted on mongodb google news. Visit mongodb google news

Janney Montgomery Scott LLC decreased its holdings in shares of MongoDB, Inc. (NASDAQ:MDB) by 31.9% in the 2nd quarter, according to its most recent 13F filing with the Securities and Exchange Commission (SEC). The fund owned 1,850 shares of the company’s stock after selling 868 shares during the quarter. Janney Montgomery Scott LLC’s holdings in MongoDB were worth $760,000 at the end of the most recent reporting period.

Several other hedge funds have also added to or reduced their stakes in MDB. GPS Wealth Strategies Group LLC acquired a new stake in shares of MongoDB during the second quarter worth $26,000. Global Retirement Partners LLC lifted its holdings in shares of MongoDB by 346.7% during the first quarter. Global Retirement Partners LLC now owns 134 shares of the company’s stock worth $30,000 after buying an additional 104 shares in the last quarter. Pacer Advisors Inc. lifted its holdings in shares of MongoDB by 174.5% during the second quarter. Pacer Advisors Inc. now owns 140 shares of the company’s stock worth $58,000 after buying an additional 89 shares in the last quarter. Bessemer Group Inc. acquired a new stake in shares of MongoDB during the fourth quarter worth $29,000. Finally, Manchester Capital Management LLC acquired a new stake in shares of MongoDB during the first quarter worth $36,000. Institutional investors and hedge funds own 88.89% of the company’s stock.

Wall Street Analysts Weigh In

A number of equities research analysts have recently weighed in on MDB shares. Royal Bank of Canada reissued an “outperform” rating and set a $445.00 target price on shares of MongoDB in a report on Friday, September 1st. Macquarie upped their price objective on shares of MongoDB from $434.00 to $456.00 in a research report on Friday, September 1st. Needham & Company LLC upped their price objective on shares of MongoDB from $430.00 to $445.00 and gave the stock a “buy” rating in a research report on Friday, September 1st. Canaccord Genuity Group upped their price objective on shares of MongoDB from $410.00 to $450.00 and gave the stock a “buy” rating in a research report on Tuesday, September 5th. Finally, Piper Sandler upped their price objective on shares of MongoDB from $400.00 to $425.00 and gave the stock an “overweight” rating in a research report on Friday, September 1st. One research analyst has rated the stock with a sell rating, three have issued a hold rating and twenty-two have assigned a buy rating to the stock. According to data from MarketBeat, the company presently has an average rating of “Moderate Buy” and a consensus price target of $415.46.

Get Our Latest Report on MDB

Insider Buying and Selling at MongoDB

In other MongoDB news, Director Dwight A. Merriman sold 2,000 shares of the stock in a transaction that occurred on Tuesday, October 10th. The shares were sold at an average price of $365.00, for a total transaction of $730,000.00. Following the completion of the sale, the director now owns 1,195,159 shares of the company’s stock, valued at approximately $436,233,035. The transaction was disclosed in a legal filing with the Securities & Exchange Commission, which is accessible through the SEC website. Earlier, Director Dwight A. Merriman sold 1,000 shares of the stock in a transaction that occurred on Friday, September 1st. The stock was sold at an average price of $395.01, for a total transaction of $395,010.00. Following that transaction, the director directly owned 535,896 shares in the company, valued at approximately $211,684,278.96. In the last 90 days, insiders sold 187,984 shares of company stock valued at $63,945,297. 4.80% of the stock is currently owned by company insiders.

MongoDB Stock Performance

MongoDB stock opened at $327.33 on Friday. The business has a 50 day moving average of $357.82 and a 200-day moving average of $341.90. MongoDB, Inc. has a 1 year low of $135.15 and a 1 year high of $439.00. The company has a debt-to-equity ratio of 1.29, a current ratio of 4.48 and a quick ratio of 4.48. The company has a market capitalization of $23.35 billion, a P/E ratio of -94.60 and a beta of 1.13.

MongoDB (NASDAQ:MDB) last released its earnings results on Thursday, August 31st. The company reported ($0.63) earnings per share for the quarter, beating analysts’ consensus estimates of ($0.70) by $0.07. The business had revenue of $423.79 million during the quarter, compared to the consensus estimate of $389.93 million. MongoDB had a negative net margin of 16.21% and a negative return on equity of 29.69%. On average, equities research analysts predict that MongoDB, Inc. will post -2.17 earnings per share for the current year.

MongoDB Profile


MongoDB, Inc. provides a general-purpose database platform worldwide. The company offers MongoDB Atlas, a hosted multi-cloud database-as-a-service solution; MongoDB Enterprise Advanced, a commercial database server for enterprise customers to run in the cloud, on-premise, or in a hybrid environment; and Community Server, a free-to-download version of its database, which includes the functionality that developers need to get started with MongoDB.

See Also

Institutional Ownership by Quarter for MongoDB (NASDAQ:MDB)



Receive News & Ratings for MongoDB Daily – Enter your email address below to receive a concise daily summary of the latest news and analysts’ ratings for MongoDB and related companies with MarketBeat.com’s FREE daily email newsletter.

Article originally posted on mongodb google news. Visit mongodb google news



MongoDB (MDB) Stock Moves -0.78%: What You Should Know – Yahoo Finance

MMS Founder
MMS RSS

Posted on mongodb google news. Visit mongodb google news

MongoDB (MDB) closed the latest trading day at $326.75, indicating a -0.78% change from the previous session’s end. This change was narrower than the S&P 500’s 1.18% loss on the day. Elsewhere, the Dow saw a downswing of 0.76%, while the tech-heavy Nasdaq depreciated by 1.76%.

Shares of the database platform witnessed a gain of 0.36% over the previous month, beating the performance of the Computer and Technology sector with its loss of 2.96% and the S&P 500’s loss of 3.35%.

Investors will be eagerly watching for the performance of MongoDB in its upcoming earnings disclosure. The company is forecasted to report an EPS of $0.49, showcasing a 113.04% upward movement from the corresponding quarter of the prior year. Our most recent consensus estimate is calling for quarterly revenue of $402.75 million, up 20.72% from the year-ago period.

For the entire fiscal year, the Zacks Consensus Estimates are projecting earnings of $2.34 per share and a revenue of $1.61 billion, representing changes of +188.89% and +25.06%, respectively, from the prior year.

Investors should also pay attention to any latest changes in analyst estimates for MongoDB. These recent revisions tend to reflect the evolving nature of short-term business trends. Therefore, positive revisions in estimates convey analysts’ confidence in the company’s business performance and profit potential.

Based on our research, we believe these estimate revisions are directly related to near-term stock moves. We developed the Zacks Rank to capitalize on this phenomenon. Our system takes these estimate changes into account and delivers a clear, actionable rating model.

The Zacks Rank system, spanning from #1 (Strong Buy) to #5 (Strong Sell), boasts an impressive track record of outperformance, audited externally, with #1 ranked stocks yielding an average annual return of +25% since 1988. Over the past month, the Zacks Consensus EPS estimate has remained steady. MongoDB is holding a Zacks Rank of #2 (Buy) right now.

In the context of valuation, MongoDB is at present trading with a Forward P/E ratio of 140.87. This denotes a premium relative to the industry’s average Forward P/E of 35.52.

The Internet – Software industry is part of the Computer and Technology sector. With its current Zacks Industry Rank of 72, this industry ranks in the top 29% of the more than 250 industries tracked.

The Zacks Industry Rank is ordered from best to worst in terms of the average Zacks Rank of the individual companies within each of these sectors. Our research shows that the top 50% rated industries outperform the bottom half by a factor of 2 to 1.

Make sure to utilize Zacks.com to follow all of these stock-moving metrics, and more, in the coming trading sessions.

Want the latest recommendations from Zacks Investment Research? Today, you can download 7 Best Stocks for the Next 30 Days. Click to get this free report

MongoDB, Inc. (MDB) : Free Stock Analysis Report

To read this article on Zacks.com click here.

Zacks Investment Research

Article originally posted on mongodb google news. Visit mongodb google news



Combining the Power of Text-Based Keyword and Vector Search – The New Stack

MMS Founder
MMS RSS

Posted on mongodb google news. Visit mongodb google news

<meta name="x-tns-categories" content="Data / DevOps“><meta name="x-tns-authors" content="“>

Combining the Power of Text-Based Keyword and Vector Search – The New Stack

DORA for Dev Productivity

Are DORA metrics effectively used at your organization to measure developer team productivity?

Yes, DORA is used effectively.

0%

DORA is used but it is at best only “somewhat effective.”

0%

DORA gets talked about but it isn’t actually used.

0%

I don’t know about DORA, is she an explorer?

0%

2023-10-26 06:10:52

Combining the Power of Text-Based Keyword and Vector Search

sponsor-mongodb,sponsored-post-contributed,



Data

/

DevOps

Hybrid search engines combine them to get the best of both worlds, ultimately delivering the results and webpages that users are looking for.


Oct 26th, 2023 6:10am by


Featued image for: Combining the Power of Text-Based Keyword and Vector Search

Image from Brian A Jackson on Shutterstock.

Let’s say that your company wants to build some sort of search system. Some of your engineers prefer full-text search. Others proclaim semantic search as the future. Here’s the good news: You can have both! We call this hybrid search.

Let’s take a high-level look at why hybrid search engines might be the answer to frictionless information retrieval. Let’s go!

What Is Text-Based Keyword Search?

Before we talk about hybrid search, we should talk about the two pieces involved.

Text-based keyword search, which you might more commonly come across as “full-text search,” means that when a user looks in specific text or data for a certain word or phrase, that search will return all results that contain some or all of the words from the user’s query.

For example, when I go to my local library’s website and search “James Patterson,” it shows me that author’s books. What it’s not showing me are other similar books I might like since I enjoy James Patterson.

Search results for James Patterson books online with the local library

Side note: In fact, it looks like my library might even be using traditional search, which is even more precise. The only results I see match my search query exactly.

In other words, text-based keyword search will deliver more results than traditional search, because it’s not so nitpicky about precision.
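To make the distinction concrete, here is a toy C++ sketch of keyword matching, with no ranking, stemming, or index; the catalog and query are invented purely for illustration. A document matches if it shares any word with the query, which is why it returns more than an exact-phrase lookup would.

#include <algorithm>
#include <cctype>
#include <iostream>
#include <set>
#include <sstream>
#include <string>
#include <vector>

// Lower-case and split a string into words.
std::set<std::string> tokenize(const std::string& text) {
    std::set<std::string> words;
    std::stringstream ss(text);
    std::string w;
    while (ss >> w) {
        std::transform(w.begin(), w.end(), w.begin(),
                       [](unsigned char c) { return std::tolower(c); });
        words.insert(w);
    }
    return words;
}

// A document matches if it contains at least one word from the query.
bool keywordMatch(const std::string& query, const std::string& doc) {
    std::set<std::string> q = tokenize(query), d = tokenize(doc);
    for (const std::string& w : q) {
        if (d.count(w)) return true;
    }
    return false;
}

int main() {
    std::vector<std::string> catalog = {
        "Along Came a Spider by James Patterson",
        "The Girl with the Dragon Tattoo by Stieg Larsson",
        "James and the Giant Peach by Roald Dahl",
    };
    for (const std::string& title : catalog) {
        if (keywordMatch("James Patterson", title)) {
            std::cout << title << "\n";
        }
    }
    return 0;
}

Running this prints both the James Patterson title and "James and the Giant Peach": some of the query words matched, so both come back, which is the "some or all of the words" behavior described above.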

What Is Semantic Search?

Full-text search works well if you know not only what you want to find, but also how to describe it. But what if you want to conduct a search query yet can’t think of the proper keywords? Maybe you’re using large language models (LLMs) that don’t have the most current information.

How will you ever find what you’re looking for?

In semantic search, data and queries are stored as vectors (also called embeddings), not plain text. Machine learning (ML) models take the source input (meaning any unstructured data, whether that be video, text, images or even audio) and turn it into vector representations.

But what does this even mean?

Let’s do another example. Maybe your partner loves cycling, so you decide to buy them a new bike for their upcoming birthday. But you don’t know anything about bikes. All you can think of is the Schwinn you rode as a child.

So you go on Google, Amazon or another marketplace and search “Schwinn bikes.”

Because a search engine using vector search understands that you’re looking for Schwinn bikes and probably also comparable alternatives, it might additionally show you bikes from other brands like Redline and Retrospec.

A vector-based search algorithm better understands the context of your search queries. Some people call this semantic search. According to Merriam-Webster, “semantic” means “of or relating to meaning in language.”

In other words, a vector-based search system better understands the thought and intent behind search queries to deliver results you might be looking for but don’t know how to search for. With the help of artificial intelligence, vector search can deliver information outside of what large language models can provide. For context, the limitation here is that LLMs were trained using data from the internet and other sources only up to the end of 2021. Vectors (embeddings) augment this, filling in the gaps where LLMs can’t quite keep up.
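Under the hood, "understanding meaning" usually comes down to comparing embedding vectors, most often with cosine similarity. The sketch below uses tiny, made-up three-dimensional embeddings purely for illustration; real models produce hundreds or thousands of dimensions, and the product names are hypothetical.

#include <cmath>
#include <iostream>
#include <string>
#include <utility>
#include <vector>

// Cosine similarity between two dense embeddings: values near 1.0 mean the vectors
// point in the same direction, i.e. the items are semantically close.
double cosine(const std::vector<double>& a, const std::vector<double>& b) {
    double dot = 0.0, na = 0.0, nb = 0.0;
    for (size_t i = 0; i < a.size(); ++i) {
        dot += a[i] * b[i];
        na  += a[i] * a[i];
        nb  += b[i] * b[i];
    }
    return dot / (std::sqrt(na) * std::sqrt(nb));
}

int main() {
    // Toy embeddings; a real system would get these from an ML embedding model.
    std::vector<std::pair<std::string, std::vector<double>>> products = {
        {"Schwinn cruiser bike", {0.9, 0.1, 0.0}},
        {"Retrospec city bike",  {0.8, 0.2, 0.1}},
        {"Lawn mower",           {0.0, 0.1, 0.9}},
    };
    std::vector<double> query = {0.85, 0.15, 0.05}; // pretend embedding of "Schwinn bikes"

    for (const auto& [name, emb] : products) {
        std::cout << name << ": " << cosine(query, emb) << "\n";
    }
    return 0;
}

The two bikes score close to the query even though only one of them contains the word "Schwinn", which is exactly the behavior described above.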

It’s almost like a vector search engine can read your mind a little bit.

If you like watching videos, this one is a great overview of the technology.

A Note on Text-Based Keyword vs. Vector Search

You might be wondering how we can qualify a text-based search engine versus a vector-based one. The answer is… it’s complicated. That’s because all of this kind of happens on a spectrum of sorts.

Qualifying if a search engine is full text or vector can be tricky.

More specifically, when we break it down even further, we’re looking at dense and sparse vectors. Think of these vectors as the little worker bees responsible for rendering your search results.

With sparse embeddings, there are fewer worker bees, so the user is going to get fewer search results. However, using sparse vectors is typically more efficient since the search index will have to evaluate less information.

Dense vector search, on the other hand, typically gives the user more to work with. In our previous example with Schwinn bikes, dense embeddings might mean that the results display six different brands, as opposed to just one or two.

What does this look like behind the curtain? Sparse vectors are composed mostly of zero values. This is why if someone searches for “cat” with sparse vectors, the results might simply be “cat.”

Conversely, an algorithm with dense vectors might return “cat,” “feline” and “tabby.” With more non-zero values, dense vectors are better able to understand context and return results that don’t exactly match the query but still relate to it somehow.

Sparse and dense vectors help determine both the relevance and abundance of the results.
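The storage difference is easy to picture in code. Below, one vector is held densely (every dimension, zeros included) and a query is held sparsely (only its non-zero entries); the dimension indices and weights are made up for the example. Scoring only touches the query's non-zero entries, which is the efficiency advantage mentioned above.

#include <iostream>
#include <unordered_map>
#include <vector>

// Two ways to store a vector:
//   dense : one value per dimension, including all the zeros
//   sparse: only the non-zero entries, keyed by dimension index
using Dense  = std::vector<double>;
using Sparse = std::unordered_map<size_t, double>;

// Dot product that only visits the non-zero entries of the sparse vector,
// which is why sparse scoring tends to be cheaper to evaluate.
double dot(const Sparse& sparse, const Dense& dense) {
    double sum = 0.0;
    for (const auto& [index, value] : sparse) {
        if (index < dense.size()) sum += value * dense[index];
    }
    return sum;
}

int main() {
    // A vocabulary-sized query where only two terms (say "cat" at 2 and "tabby" at 7) are present.
    Sparse query = {{2, 1.0}, {7, 0.5}};
    Dense document(10, 0.0);
    document[2] = 0.8; // the document mentions "cat"
    document[5] = 0.3; // and some unrelated term

    std::cout << "score = " << dot(query, document) << "\n"; // prints 0.8
    return 0;
}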

Why Combine Them into a Hybrid Search?

Not sure which approach is best for you? Well, you can have your cake and eat it too. And everybody loves cake!

How exactly do we do this? It’s called hybrid search.

Hybrid search engines combine both search methods to get the best of both worlds, ultimately delivering the results and webpages that users are looking for.

Think about our previous example with bicycles. A text search offers the benefit of more fine-tuned results but only if you know that you want to see solely websites that reflect your search query specifically.

However, if you’re still shopping around, your query might limit you to one brand, and you might miss out on the perfect bike for your spouse.

On the other hand, the beauty of semantic search is that it goes above and beyond what you type into the search bar. Unfortunately, these search systems tend not to be the most efficient.

With a hybrid search engine, you leverage the strengths of both approaches (including the strengths of sparse and dense vectors) to get hybrid results. Think of a hybrid search engine as something that can cast a very wide net but still target a specific type of fish. It’s the optimal combination.

To achieve the ideal search experience, hybrid search engines will pack the greatest punch.
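The article doesn't spell out how the two result lists get merged, and different engines do it differently. One widely used technique, shown here purely as an illustration rather than as any particular product's implementation, is reciprocal rank fusion: each document's hybrid score is the sum of 1/(k + rank) over the lists it appears in, so items ranked well by either method float to the top. The document names and the constant k = 60 are arbitrary.

#include <algorithm>
#include <iostream>
#include <map>
#include <string>
#include <utility>
#include <vector>

// Reciprocal rank fusion of a keyword result list and a vector result list.
std::vector<std::pair<std::string, double>> fuse(
        const std::vector<std::string>& keywordRanked,
        const std::vector<std::string>& vectorRanked,
        double k = 60.0) {
    std::map<std::string, double> score;
    for (size_t r = 0; r < keywordRanked.size(); ++r) score[keywordRanked[r]] += 1.0 / (k + r + 1);
    for (size_t r = 0; r < vectorRanked.size(); ++r)  score[vectorRanked[r]]  += 1.0 / (k + r + 1);

    std::vector<std::pair<std::string, double>> fused(score.begin(), score.end());
    std::sort(fused.begin(), fused.end(),
              [](const auto& a, const auto& b) { return a.second > b.second; });
    return fused;
}

int main() {
    // Hypothetical result lists for the query "Schwinn bikes".
    std::vector<std::string> keyword = {"Schwinn cruiser", "Schwinn mountain bike"};
    std::vector<std::string> vector  = {"Schwinn cruiser", "Retrospec city bike", "Redline BMX"};

    for (const auto& [doc, s] : fuse(keyword, vector)) {
        std::cout << doc << "  (" << s << ")\n";
    }
    return 0;
}

The document that both methods liked ("Schwinn cruiser") ends up first, while results unique to either method still appear: the wide net and the targeted catch at the same time.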

Hybrid Search Is the Future

Search engines are constantly evolving. If we look at the almighty Google as an example, one thing is for certain: The search engine’s No. 1 priority is to give the people who use it the most relevant results with as little effort on their end as possible.

Combining full-text search with vector search into hybrid search is proving to be the most effective way to do that.

We can all learn something valuable here, and MongoDB is making search more convenient than ever. We love vector search and have even incorporated it into Atlas. This allows you to build intelligent applications powered by semantic search and generative AI. Whether you’re using full-text or semantic search, Atlas Search’s hybrid approach has your back. It lives alongside your data, making it faster and easier to deploy, manage and scale search functionality for your applications.

Accessing data and building AI-powered experiences has never been this smooth.


Article originally posted on mongodb google news. Visit mongodb google news
