Mobile Monitoring Solutions


Grafana v10.3: Visualizations, Alerting, Management and Log Analysis Improvements

MMS Founder
MMS Almir Vuk

Article originally posted on InfoQ. Visit InfoQ

The Grafana 10.3 release introduces a range of enhancements for visualization, instance management, alerting, and log analysis. These upgrades include improved tooltips and zoom functionality for data navigation, alongside features for tracking metric changes and visualizing system health. Additionally, enhancements in alerting organization and log analysis are also available.

The first notable improvement is the addition of enhanced tooltips for data visualization. These tooltips feature colour indicators for easy data differentiation, uniform time display across panels, and support for longer labels, providing users with more detailed information and a consistent experience across all Grafana panels.

Another enhancement is the introduction of pan and zoom functionality within the Canvas panel. According to the announcement, this feature allows users to navigate through data more effectively, which is particularly beneficial for those working with large-scale or highly detailed canvas visualizations.

Users can now track metric changes over time in stat panels, which, according to the Grafana team, enables an easier understanding of metric growth. Integration of colour indicators for percentage change offers a quick way to identify trends in data. Note that this feature is generally available in all editions of Grafana.

Furthermore, enum values can be plotted in time series and state timeline visualizations, enabling users to visualize system and service health effectively. This feature utilizes the convert field type transformation to display enum values, enhancing visualization capabilities.
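As a rough illustration, transformations are configured per panel in the dashboard JSON. The sketch below assumes the transformation id `convertFieldType` and a numeric `status` field being converted to an enum; the exact option names should be checked against the Grafana 10.3 panel schema:

```json
{
  "transformations": [
    {
      "id": "convertFieldType",
      "options": {
        "conversions": [
          {
            "targetField": "status",
            "destinationType": "enum"
          }
        ]
      }
    }
  ]
}
```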

The release also introduces improved management capabilities for Grafana instances, including enhanced control over anonymous access. Users can now monitor anonymous devices connected to their instance and limit their number for better security and resource management. Note that this feature is generally available in Grafana open source and Grafana Enterprise.

Furthermore, the release offers the ability to query across multi-stack data sources, simplifying the querying process for users managing metrics or logs across multiple tenants in Grafana Cloud.

Another significant improvement is the enhanced reporting experience, allowing users to share complete table data in PDF reports. Two new options are introduced: embedding all table data as a PDF appendix and generating a separate PDF for table data.

In terms of alerting, the release includes improvements in alerting contact points organization and visibility, simplifying the alert management experience. The UI now displays notification policies linked to each contact point for improved understanding of alert configurations.

Regarding log analysis, the introduction of a new popover menu in Grafana brings improvements to search. According to the original announcement, this feature simplifies searches and adjusts queries by automatically offering options to copy text and add filters. As stated, it is compatible with various log data sources, such as Grafana Loki and Elasticsearch, while providing improved efficiency in single and mixed data source modes.

In addition to the release of Grafana 10.3, users could find it helpful to explore the official YouTube channel, which published a playlist of 16 videos showcasing all the new features available in this version. These videos provide detailed demonstrations and explanations of each feature, offering users a comprehensive overview of the enhancements introduced in Grafana 10.3.

Lastly, to address a technical issue within the Grafana release package management process, Grafana 10.3.0 and Grafana 10.3.1 are being simultaneously released. Grafana 10.3.1 contains no breaking or functional changes compared to 10.3.0. Users can explore the documentation for Grafana 10.3.0.

About the Author

Subscribe for MMS Newsletter

By signing up, you will receive updates about our latest information.



Enhancing Observability: Amazon CloudWatch Logs Introduces Account-Level Subscription Filter

MMS Founder
MMS Renato Losio

Article originally posted on InfoQ. Visit InfoQ

The recent update to Amazon CloudWatch Logs introduces support for account-level subscription filtering. With this enhancement, developers can now access a real-time feed of CloudWatch Logs from all log groups and have it delivered to a single destination for further processing.

The implementation of a single account-level subscription filter enables customers to deliver real-time log events that are ingested into Amazon CloudWatch Logs to an Amazon Kinesis data stream, Amazon Kinesis Data Firehose, or AWS Lambda for custom processing, analysis, or redirection to alternative destinations. It is possible to set an account-level subscription policy that includes only a subset of log groups in the account.

All the logs that are sent to a receiving service through an account-level subscription policy are base64 encoded and compressed as gzip files. Designed to reduce the overhead of managing large and complex AWS deployments, the account-level subscription filter applies to both existing log groups and any future log groups that match the configuration. Jeremy Daly, CEO and founder of Ampt, comments:

This is a dream come true for those of us who wrestle with tens (if not hundreds) of log group subscription filters.
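Since each record delivered through a subscription filter is base64-encoded and gzip-compressed, a destination such as a Lambda function must reverse both steps before reading the log events. A minimal Python sketch, with a fabricated sample payload shaped like the documented CloudWatch Logs subscription format:

```python
import base64
import gzip
import json

def decode_subscription_record(data: str) -> dict:
    """Decode one CloudWatch Logs subscription record: base64 -> gzip -> JSON."""
    compressed = base64.b64decode(data)
    payload = gzip.decompress(compressed)
    return json.loads(payload)

# Fabricated sample payload, following the documented subscription message shape.
sample = {
    "messageType": "DATA_MESSAGE",
    "logGroup": "/aws/lambda/helloWorld",
    "logStream": "2024/01/30/[$LATEST]abcdef",
    "logEvents": [
        {"id": "1", "timestamp": 1706600000000, "message": "Test event"},
    ],
}

# Simulate what CloudWatch Logs delivers to the destination: gzip, then base64.
encoded = base64.b64encode(gzip.compress(json.dumps(sample).encode())).decode()

decoded = decode_subscription_record(encoded)
print(decoded["logGroup"])        # → /aws/lambda/helloWorld
print(len(decoded["logEvents"]))  # → 1
```

In a real Lambda destination, the encoded string arrives under `awslogs.data` in the invocation event rather than being constructed locally as above.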

The put-account-policy API can be used to set the CloudWatch Logs account-level subscription. For example, using the AWS CLI, the following command sends all log data to the helloWorld Lambda function, excluding the group names LogGroupToExclude1 and LogGroupToExclude2.

aws logs put-account-policy \
	--policy-name "ExamplePolicyLambda" \
	--policy-type "SUBSCRIPTION_FILTER_POLICY" \
	--policy-document '{"DestinationArn": "arn:aws:lambda:region:123456789012:function:helloWorld", "FilterPattern": "Test", "Distribution": "Random"}' \
	--selection-criteria 'LogGroupName NOT IN ["LogGroupToExclude1", "LogGroupToExclude2"]' \
	--scope "ALL"

Source: AWS documentation.
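The same policy can be set programmatically. The sketch below uses the boto3 CloudWatch Logs client's `put_account_policy` call, mirroring the CLI example above; the account ID and function name are placeholders. Building the policy document as a plain dict first makes it easy to validate before the API call:

```python
import json

def build_subscription_policy(destination_arn: str,
                              filter_pattern: str = "",
                              distribution: str = "Random") -> str:
    """Serialize the policy document for an account-level subscription filter."""
    return json.dumps({
        "DestinationArn": destination_arn,
        "FilterPattern": filter_pattern,
        "Distribution": distribution,
    })

policy_document = build_subscription_policy(
    "arn:aws:lambda:region:123456789012:function:helloWorld",
    filter_pattern="Test",
)

def apply_policy(policy_document: str) -> None:
    # Requires AWS credentials and network access; not executed here.
    import boto3
    boto3.client("logs").put_account_policy(
        policyName="ExamplePolicyLambda",
        policyType="SUBSCRIPTION_FILTER_POLICY",
        policyDocument=policy_document,
        selectionCriteria='LogGroupName NOT IN ["LogGroupToExclude1", "LogGroupToExclude2"]',
        scope="ALL",
    )

print(json.loads(policy_document)["FilterPattern"])  # → Test
```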

Regardless of the chosen destination, AWS stresses the importance of evaluating upfront the volume of log data that will be generated to avoid throttling. Developers should make sure that the Kinesis Data Firehose stream or Lambda function can handle the volume, or that the Kinesis data stream has enough shards. With Kinesis Data Streams, throttled deliveries are retried for up to 24 hours and then dropped.

The cloud provider warns as well about the risk of infinite recursive loops with subscription filters, triggering large increases in ingestion billing. The team provides advice on recursion prevention:

To mitigate this risk, we recommend that you use selection criteria in your account-level subscription filters to exclude log groups that ingest log data from resources that are part of the subscription delivery workflow (…) When excluding log groups, consider the following AWS services that produce logs and may be a part of your subscription delivery workflows: Amazon EC2 with Fargate, Lambda, AWS Step Functions, and Amazon VPC flow logs that are enabled for CloudWatch Logs.

Referring to the new features added at re:Invent, and the recent announcement that CloudWatch alarms now support AWS Lambda functions as an action for state changes, Ran Isenberg, principal software architect at CyberArk, notes:

CloudWatch is on a roll lately and is making up lost ground in comparison to third-party observability tools.

CloudWatch Logs Account-level Subscription Filter is available in all AWS commercial regions except Israel and Canada West.

Each AWS account can create one account-level subscription filter.



Podcast: The State of Software Engineering from an Academic Perspective

MMS Founder
MMS Martin Kropp Craig Anslow

Article originally posted on InfoQ. Visit InfoQ

Transcript

Shane Hastie: Good day, folks. This is Shane Hastie for the InfoQ Engineering Culture Podcast. Today, we’re sitting down, in person for a change, in sunny Otaki in New Zealand.

We’re going to take a look at the academic perspective on software engineering. I’m joined by Martin Kropp and Craig Anslow, and I’ll ask the two of them to introduce themselves.

Introductions [00:44]

Martin Kropp: I’m Martin Kropp. I’m a professor of software engineering at the University of Applied Sciences Northwestern Switzerland, which is situated close to Zurich. I’m currently spending my sabbatical here with Craig Anslow at the Victoria University of Wellington. I’m doing research mainly in Agile methodologies, improving the way we develop software, and hopefully making it even more efficient in the future.

Shane Hastie: And Craig?

Craig Anslow: Good day, kia ora. I’m Craig Anslow, a senior lecturer at Victoria University of Wellington, Te Herenga Waka. That’s in New Zealand. I’m a senior lecturer in software engineering, and I’ve been doing the role for about seven years. First exposed to Agile processes 20-plus years ago in my undergraduate studies, and now I’ve been teaching Agile methods for about a decade now at several institutions in Canada, UK, and now New Zealand.

My interest is in the general area of human aspects of software engineering. We look at Agile processes, we look at Agile tools, and we look at actually applying some of those into developing software into novel domains for software developers and in digital health.

Both Martin and I have been involved in several research projects studying Agile processes and we’d love to share some of that information today with you.

Shane Hastie: Thank you very much. So, very wide statement. What is the current state of software engineering around the world at the moment? What are you seeing in the academic space?

Trends seen from research [02:14]

Martin Kropp: One big topic is using AI for software development. That’s definitely a huge topic currently. Hard to see where it will lead us really in the future. That’s one.

DevOps, of course, maybe less from the academic point of view, much more from the practical point of view. I’m from a University of Applied Sciences, so I’m really closer to engineering. DevOps is a huge topic, increasingly including the security issue; DevSecOps is the term for it. That will probably need much more investigation into how to include these aspects into the software development process from the software engineering point of view.

Don’t automate without first fixing people and process challenges [02:54]

Craig Anslow: I would say automation. I think developers and organizations are looking at how to automate things, make things faster, quicker, and so on. I think AI helps in that context. But also, I think the area that we focus on, software development processes, is very critical. Without good software development processes, you’re constantly fighting within your organization, without an efficient process to actually get stuff done or to figure out what to automate. I think that’s where a lot of organizations are struggling on the people and process side of things.

I think if you solve those, that would lead to better automation. So there’s lots of technical areas that are focusing on automation of build scripts, deployment, continuous integration to continuous deployment. Along with the other things, search-based generation of code, search-based interfaces to create code. So automating of code creation, code searching, finding components, making the code work. I think holistically putting that all together fundamentally, if you don’t have good processes and teamwork, your project is doomed for failure.

Another important aspect that’s developed recently is diversity and inclusion. I’m not focusing on that specific area, but there are various people that are focusing on how to make teams more diverse, more inclusive, and I think that’s certainly a hot topic in the space. It’s not something I focus in on, but there are other people focusing on that.

Once you start to generate or focus on automation and you bring in AI, there’s a lot of people focusing on ethics. What is ethically right to do with the software that we’re developing and building? Does it make sense? I think those particular topics, ethics, inclusivity, and diversity aren’t my areas, but I’d see that certainly as a hot topic that people are focusing on in the software engineering world.

Challenges of remote work, especially for new graduates [04:37]

And as well as automation, I think another area that lots of organizations are struggling with, because software is a very abstract, intangible thing, is actually having remote workers. There are some organizations that are clearly not geared up for being distributed or remote, and are struggling to adapt to that. Whereas open-source companies like the GitHubs and the GitLabs have been doing that since day one. They’re used to doing that.

Other organizations have been forced to do that with the recent COVID pandemic and are now saying, “Hey look, maybe that wasn’t actually the best thing to force everybody to do. We want you back in the office.” So they are trying to understand workers’ needs better and how to best support them. I think the hybrid model, where you work some days from the office but also work remotely, is certainly a hot topic that software engineering people are studying, and also from an organizational perspective.

We hear that from our new grads that, “Well, we are a new grad. We want to go out there but we haven’t worked in a team physically before. So how do we do this hybrid thing?” So not only struggling from the university studies, so new grads certainly struggle with that new hybrid type of model. So I think there’s certainly a lot of need for software engineering organizations to figure out how to best do this and support their teams.

Martin Kropp: You mentioned companies like GitHub or the big digital players like Google. They are developing Agile, or organized Agile from the beginning. I think it’s a big issue in traditional companies. I did recently an interview with a guy from a large Swiss bank where they really have the clash between the technical departments, which are organized Agile and also are really product-oriented, and then there are the business guys in the large organization which have a completely different view on how to manage the products and the projects.

There is this clash still. So traditional companies fight really with this current transformation process, and then there’s no solution yet really how to do this properly.

Craig Anslow: Yes, so I mean for example, Silicon Valley companies have forced employees to work and live in Silicon Valley. But the recent trend from the pandemic is that people want to get out of those places because they can’t afford to live there on the salaries that they’re making, so they want to go to cheaper places and work remotely. I think one notable company in this part of the world that does focus on Agile tools and practices is Atlassian, based in Sydney, and they opened up their market for having employees distributed.

As long as they work certain hours within the time zone, they’re actually being able to pick up a better talent pool. So I think companies realizing that not everybody wants to live in these specific locations where the headquarters are has actually opened up a better talent pool for some of these companies. I think that’s one advantage that the COVID pandemic has actually pushed upon organizations. Organizations that are embracing change, which is an Agile thing, and that change is that the workers want to be able to work where they want to work.

And if you look back at the history of New Zealand, a famous professor, Sir Paul Callaghan, said, “Go where the talent is,” focus on where people want to work. If you can build organizations in remote places like New Zealand, for example, that’s where you can build extremely successful companies. Obviously Wētā Digital is an example of that here in New Zealand, in Wellington, where they are producing world-class software for the movie industry in a small, small place. Some of those people are remote as well. But I think that’s something that’s going to be an ongoing thread going forward. I don’t think that’s going to go away. You definitely want your organizations to be distributed.

Shane Hastie: So Martin, coming back to the DevOps question, isn’t this a solved problem?

DevOps is not a solved problem yet [08:16]

Martin Kropp: From the technical point of view, it might be a solved problem. But here I come back to the business view again, if you can deploy it but you cannot manage it on the business side, the product management, then you have a problem. I think it’s more really from the process side and organizational point of view than maybe from the technical point of view.

Shane Hastie: Thinking of our audience, who are technologists, technical leaders, what do they need to do to bring this DevOps mindset into their organization?

Martin Kropp: I think a very strong product orientation, also for the developers. I think there are some top companies who have implemented the DevOps process, also from the technical point of view. But a huge part of companies, say a classical manufacturing company that is still developing very traditionally, have only a small part of this; there is still a huge portion of the market that hasn’t yet implemented the DevOps technologies. So they need to be trained, probably really very well, to adjust to this product view, the DevOps view.

High automation, as you have mentioned, Craig. Also really building quality into the development; that is still not very much practiced. We mentioned TDD; I think just a minority of companies are applying TDD in a proper way. So we still have a lot to do, I think, to guarantee the quality that is needed for the product really, for a software product.

Shane Hastie: So let’s dig a bit further into that product focus versus project focus. What is the big shift there?

Shifting from project to product focus [10:00]

Martin Kropp: I think the big shift is the long-term planning. With a project, you’re focusing on maybe one, two, three years, or maybe five years for a larger project. But with the product view you have a long-term view. So I think that the whole planning will differ, with different kinds of budgets from the process and organization point of view.

And from the technical point of view, you really have to implement and build in the quality assurance measures from the very first point. You cannot wait until the end of the project to write tests, because there is no end of a project in the product view. You have to build in the quality assurance from the very beginning. There we still have a lot to do, also from the education point of view, to really bring this mindset to the coming engineers that we are educating.

Shane Hastie: Let’s talk about that education, what needs to change in the training of software engineers?

What needs to change in the education of software engineers? [10:58]

Martin Kropp: We started recently discussions at our university about this, and we didn’t yet find a solution. I mean, offering more software product development-oriented courses might help. I, myself, am teaching a course, it’s called Software Construction and another course, Software Testing Quality Management, where we are covering testing methodologies, testing practices in detail. But still, it’s an isolated course, so it’s not included in the normal programming courses. So students, of course, still then learn programming and learn testing in a different course. So it’s still separated, and I think the challenge is to bring this really together, and maybe doing more projects.

We are currently discussing solutions where we really want to develop software products, and students have to continuously develop and evolve these products over many years. So students join the product, and when they finish their studies, they leave the product and new students join it. That’s something we are currently discussing. They have to learn to read code from the very beginning. I think that’s something we hardly teach actually, at least at our university: reading and understanding code, which is probably 70 to 80% of what we’re actually doing as software developers.

Yes, we are developing ideas but we haven’t yet come to a conclusion. But I think getting to maintain, evolve existing software systems might help to also propagate this mindset of a software product.

Craig Anslow: Yes, I think the advent of the success that we’ve had in the software industry is that we’ve built a lot of tools, which means a lot of people can program, which means a lot of people can produce code quickly, which is good. But it’s also really, really bad because it means that you’re going to come across the issues which are going to be security vulnerabilities, quality of code.

So if we’re using these automated code generation tools, how do we know that the output actually is correct and does what it’s supposed to do? I think coming back to your question around software engineering education, there aren’t enough people who get properly trained in software engineering programs. There aren’t enough, and some AI programs are getting up and running.

There’s cybersecurity programs up and running now. But there ain’t enough engineering types of people out there to be able to produce the quality assurances that we need, the security aspects that we need, and just the quality of design.

And so this is where we’re getting into issues like technical debt, like security vulnerabilities that need to be resolved, lots of bugs, crashes, crash reporting, and so on. So the quality of code is getting worse, I believe, but we’re producing a lot more code. So there aren’t enough bodies able to get properly trained through the university education system.

You can learn lots of stuff online, but there’s a lot of stuff that you just can’t absorb in a few months to get a qualification without actually sitting and doing a proper formal education in this space. I mean, obviously I’m a traditionalist and I’m advocating for a university-style of education, but if you go and do a bootcamp for six weeks to two months, less than six months of a training program and come and expect to be writing mission-critical software, it’s just not possible. So it’s a balance of having the right level of education for the right type of role.

I think there’s always going to be a need for people to be able to produce code quickly and through some of those bootcamps, self-train yourself. But if you look at the fundamental big tech companies, they want people that have studied deep tech to be able to come work on those fundamental systems, the banking systems. You wouldn’t just trust a new person that’s had not a lot of education and a lot of experience to get there.

The education system is one way for people to get their foot in the door. If you look at places like internship programs, the one here in New Zealand which I’ll plug is Summer of Tech. They have about 1000 students on board, 300 jobs. So 30% of people that are going to get internships, that’s not a lot of internships that are going, that are available, and there’s not enough positions for that. There’s a glut of people, but not all of them are formally trained, they’re half trained.

Making software engineering an accredited profession? [15:24]

I think the other aspect and the flip side of that, to address that software engineering education question is that once people are unleashed unto the world, they’ve done their formal training, there’s no checkups on what their portfolio of work is doing. So if you’re a certified engineer in countries like Canada and so on, you get a professional engineering, you belong to the professional engineering organization, sometimes you have to submit portfolios of work in some of these spaces. But in software engineering, no, there isn’t any of that.

A lot of the programs are accredited under the Washington Accord, the standard against which programs are measured for the quality of education, and all New Zealand universities and many overseas are also getting accredited by these bodies. But once you leave the university institution, there isn’t actually any formal education. There are training certifications such as Scrum Master, SAFe, ICAgile, those kinds of programs. But people aren’t actually checking the portfolio of work that people are continually doing, compared to, say, design portfolios or art portfolios.

So I think there isn’t ongoing training, there isn’t ongoing education necessarily that people need to be able to do to perform and say, “Yes, I’m certified.” So while they might be certified with these certificates and training things, there isn’t actually an overseeing body which is overseeing the certification of this engineering. The catch is, if you do that, then you put a certifying body over the top, then it becomes extremely regulated. I’m not sure that all organizations want to be regulated in that context. Companies that are pitching for bids and so on, they do want to have people that are certified in certain professional programs, but often most of them aren’t checking what the undergraduate, or their graduate, or formal education is.

So I do see that there’s a greater need for professional education, and both Martin and I teach on professional education master’s programs, which are teaching people that have industry experience to come back and do some of that formal education that they missed if they didn’t do it as an undergraduate, or get clarification or consolidation of some concepts and ideas. But I believe there’s a good market there for doing more university, formal education, or at least education not necessarily certified by a specific body such as Scrum, or Agile Alliance, or ICAgile, or the Scaled Agile Framework. If there’s some other independent body, I think that would be a good thing that allows people to be able to do that. Then you can say, “I’m a professional at doing this.” You have to submit a portfolio of work, not just sit on a two-day training course and say, “Yeah, I’ve done those exercises. I know what those mean, and I can answer those questions,” but actually submit code.

And one answer to that is GitHub, right? You can have a portfolio of work on GitHub and show the stuff that you’ve made open, but then on the flip side, if you’re working at a private organization, you can’t necessarily show the code because it’s under NDA and all those kinds of things. So there needs to be something in the middle for people that do want to show that they know stuff and keep up that knowledge acquisition, so that they can show they can do that.

Whereas going into a more sophisticated software, such as safety critical software, very important infrastructure code, those people aren’t checked that they still know this stuff. We know as you get older, you start to lose things and you’re not as proficient as you are. I mean that’s just general life knowledge. So maybe there’s a certain point where you’re too old to program maybe, I don’t know. But then we’re seeing rejuvenation of people that are COBOL programmers, because a lot of stuff is legacy code in that space. I think at some stage it would be useful to potentially have some kind of body that allows you to submit portfolios of stuff.

There may be a case of having education programs that support that. So there’s a big movement of the craftsmanship, software carpentry led by, I’ve forgotten the guy’s name from Toronto, just momentarily forgotten the guy’s name. It never works in theory, but there’s people doing that software carpentry programs, which is a form of education and it’s basically training people on very specific skill sets. So that’s a really good thing. I think having more of those things out there as programs that would help to certify or clarify independent professional practices. I think that’s what would be something that I think would be good going forward. That’s a really good movement there, software carpentry, and I think we could have more of that.

Sometimes it could be organizations that are leading that as well. So you could be organization proficiently as opposed to we’ve got X amount of people that are certified in this thing, but that just means that they’ve done that certification, doesn’t actually show the portfolio of work. I think that would be a step. I’m not saying it will work, I’m saying that we should try. You don’t know until you try. It could be a bad thing, it could be a good thing, I don’t know. But that’s just one approach to answer your question.

Martin Kropp: Maybe even for universities, that students not just make a degree in software engineering or whatever, but maybe get a competence portfolio certification where you see, okay, he has competence in UX, in security, maybe in a kind of spider diagram where you see how much has somebody done in this area, in this area, so that we offer different kinds of certificates so that industry sees really, he has competencies into this area.

Craig Anslow: Yes, because I think most of the certifications at the moment that are out there, the Agile processes ones, which I mentioned, but there’s also ones that are Microsoft certified, Java certified, programming language is specific certified, and that shows the competency in those things. People are doing those.

But I think if you want something that’s more about software engineering, the generalized thing, then there isn’t an independent body. Maybe that’s something the ACM should be doing, or IEEE, something that’s quite independent from a corporation or a university, which goes beyond just an undergraduate degree or the professional certification. That is a possibility. I don’t know.

Shane Hastie: Craig, I know an area that you’ve been actually doing some research in is the junior versus senior engineers, software engineers and the competencies, and how recent grads, junior engineers are fitting in and the difference between the way that a junior engineer works and a senior engineer works. What are some of the things you found in that research?

Research into junior and senior software engineers [21:25]

Craig Anslow: So just a quick summary of that. So it’s actually joint work with Martin, and others, and me. We did a survey with industry practitioners, particularly in Switzerland, for several years, which Martin led. Then the more recent surveys that we’ve been doing across the world in several countries, so Canada, UK, New Zealand, Switzerland as well.

The summary is that we found that the junior developers that have had just a few years of experience, I think less than three or four, were mainly focusing on getting proficient at the technical competencies. The ones that had 10-plus years experience, that are more senior developers, had already mastered those technical proficiencies and were focusing more on the process side of things and the people side of things. That’s where we saw the real benefit of adopting some of these practices and becoming more efficient and getting more effective results.

It takes time to master the technical practices, but the more important ones were actually the people practices. Maybe Martin, you want to add a little bit more to that?

Martin Kropp: Yes, exactly what you said. In these studies we put a special focus on the experience of the developers. Also, very basic technical practices were the only ones used by juniors, like unit testing or continuous integration. More advanced ones like TDD or ATDD were only really applied later, with more experience, by the senior developers. So even on the technical practices side, it was the more experienced ones who used advanced approaches like TDD and ATDD. So I think that's something we can work on: try to bring these technical practices earlier to junior programmers, so that they really start applying them from the very beginning.

But the major finding was indeed that the process and organizational issues, they really take time, because they are combined with the cultural change in organizations. They are more, then, applied by the senior developers.

Craig Anslow: You can find some of the research on the Swiss Agile Research Network website.

Software development is a stressful profession [23:27]

Additionally, what we did find is that regardless of junior, intermediate, or senior, which we studied in the breakdown, is that Agile is stressful. It doesn’t matter if you’re a junior or a senior, it’s stressful regardless. So I think one of the things with the Agile movement, which has been around 25-plus years, is that they were trying to make Agile less stressful. But it hasn’t actually done that at all. It is still stressful.

So software development is still stressful regardless of whether you're a junior or a senior person. Given this result of the studies, do we have a magic silver bullet to make it less stressful? I think the answer is no. I don't believe it's ever going to be non-stressful. It's a job, it's work, it's productivity. How do you make it less stressful? I'd say retire and go to the beach. But that's not going to happen. People have got to pay their bills. So I think there are some things we can do to make the environment better, but in general we found it was stressful, regardless.

This is a very small study, several hundred people, so in the grand context, there are millions of developers around the world. So more research would certainly need to be done to clarify that, in different situations, in different organizations, and so on. But in the few studies that we've done, we've found it stressful.

Shane Hastie: It is a high pressure career.

Craig Anslow: Yes. The rewards are good too, right.

Shane Hastie: We touched a little bit on automation. Let’s go deeper. The AI tools, how are they helping us, and how are they potentially hindering?

Model-driven engineering is not being used extensively [24:52]

Martin Kropp: I think, at least for me, it's not that clear where the way will go with AI tools. They can help us really generate code. They might even help us generate tests that test the generated code. So in this way they might really help to automate code generation. I myself am less experienced with using AI tools for software development. But that might be a way they can help us to really automate code generation.

Craig Anslow: I think there's another movement there that's been overlooked recently, in the last few years with the impetus of AI, and that's modeling, UML, diagrams. So there was a big movement in the model-driven engineering community, and it still exists and it's still doing pretty well, from what I can see. I have colleagues working in this area. That was a push to use models to write, automate, and generate code.

So that was all about code generation from diagrams: pushing the designs from a business analyst gathering requirements, writing diagrams, working with the developers, talking to customers, producing designs of the diagrams of what the code would look like, and pushing out templates of code, which you would then fill in with the algorithms. I mean, we're doing the same with AI here in that context. From the studies that we've seen, there aren't a lot of people doing model generation in practice. There is some tooling out there, and you can speak to some of the gurus in the UML world.

AI tools can help with some aspects of code generation [26:17]

But in general, we've done studies as well that also ask people, do they use diagrams? Do they use modeling tools? A well-known paper by Marian Petre from ICSE 2013 said that of about 50 people she interviewed, only about 9, I think it was 9, were actually really using UML diagrams. So it's a very small percentage, in general. It's a very small study again, but the purpose of model-driven engineering was to generate code through models.

It's the same with AI. I think there will be a series of experiments here, lots of people trying to generate code using tools like ChatGPT, which I don't have a lot of experience with, and it's only been out for just over 12 months. It hasn't been around for very long. Lots of people are using it to generate various things, and some of that is code.

The question is, can it produce complex code? I somehow doubt it. It can probably generate good outlines and frameworks and so on. But there's still the level of detail, still the hardcore software engineering aspects, the very niche IP of the algorithms and so on, which is very unlikely to be able to be generated by AI or these generation tools. So I think you'll still need that human element. But is there tooling out there that can actually expedite some of the things that we do, that can make some of the grand nuts and bolts that are really slow, faster? That would be great.

So for example, the movement of refactoring: Bill Opdyke wrote a PhD on this, there's a nice book by Martin Fowler called Refactoring, and there's a whole bunch of discussion around how we refactor code. Then some of that stuff has now been integrated into IDEs, where you can do Extract Class, or create getter and setter methods, your mutator and accessor methods, and so on. There are things like that. So the tooling has helped automate some of the refactoring. So I see something like that, where you can use AI tools inside your IDEs to generate some of the code that you actually want.

People have been doing diagrams and doing hand recognition to create models that then generate code. So I see that some of this stuff will be integrated into the software development pipeline. It's got to be, it's speeding things up. So I think it will certainly get there. I think the answer is, we don't know specifically what will be successful or not. So probably over the next decade, going back to your other question, I think lots more experiments will be happening around software engineering and AI. And that's a pretty hot topic at the moment: generation, code finding, code searching and bugs, right? Debugging.

We are doing a project at the moment around crash reporting using genetic programming. So clearly we are using AI ourselves. We’re not experts, far from it. But we are using existing methods to actually look at how to report crashes and software bugs, in particular JavaScript programs.

So I see lots of experiments using AI techniques and tools, developing and extending those, and actually helping software developers. I do see that that is something that is going to happen. There's going to be more of it. Going back: we don't talk about doing refactoring, but we are doing it. There's another paper by Emerson Murphy-Hill and others making that point. Well, I can't remember the actual title of the paper, but it's not that we say we are specifically refactoring, we just actually do it.

We just don't think about it. So I think the issue now is that AI is very prominent, and we're thinking about doing AI at the moment. I think in future years it'll just be common practice. So it'll just be something that we do without actually giving it a label. That probably maps back to Agile: we are doing Agile practices, but we don't think of it as Agile. It's just something that we do.

Martin Kropp: I completely agree. I think the AI tools will not be a revolution. It's much more an evolution on different levels, as you said. Today, you can integrate Copilot in your IDE and it really helps a lot in coding. It very often makes very good suggestions. So it helps on the low level to generate code and improve the code.

Whether we will really be able to generate code from a short description, or whatever, or complete programs, I think that's something the future will show. I think it will be really about adapting the tools, improving the tools. So it will be much more an evolutionary process, in the Agile sense, as you said. So which way we'll go in this direction, and how it'll end up, we will see.

Shane Hastie: Interesting stuff. I’d like to end with advice for young players, advice for junior software engineers. They’ve done the training, they’ve got the basic skills. What should they be looking at and thinking about?

Advice for young players [30:42]

Martin Kropp: Maybe coming back to the article from … what was it, Beck and Gamma, about loving to test software? Yes, in that sense really, start loving building quality into the code. I think that's very important for the future because, with the whole digitalization of the economy, we need to be able to really deliver products very fast. For this, you have to build in the quality from the very beginning.

Craig Anslow: Yes, I would say learn as much as you possibly can. Try different languages. Don't be set on just one specific programming language or one specific framework. Try different things. Try different organizations. Do not burn your bridges. Always move with integrity. I think it's also worthwhile exploring different domains as a software developer: working in the healthcare sector is one, or working in financial tech. You learn different things from different domains, and applying those I think is really useful. Spending a few years in each of the different domains would be particularly useful.

I think the advantage now is, and we didn't quite discuss it in detail here, that the pandemic, COVID, has actually opened up a lot of opportunities for companies to employ people from all around the world. So I think depending on where you live, definitely look at some of these remote-type positions. So you don't necessarily have to travel those distances to work in those countries, because visas and issues and stuff like that can be a problem in getting into some of these countries.

But working for different companies remotely, or even overseas, I think would be a good thing before you settle down with your life, whatever that happens to be. Get experience of other cultures and other organizations, the way that people do things, and learn different techniques and languages.

And continue to learn the whole time. I think that's one of the things if you look back at Agile, and one of the things that my colleague Allan Kelly always says, which is also true in the Agile movement: "If you're doing the same thing after three months, you're not doing Agile." So don't keep doing the same thing. That's the same with, I think, junior people: they need to keep learning and keep doing and trying new things.

Part of being a software engineer and having that craftsmanship is being able to learn, and adapt, and try different things. If you're not continuing to do that, you're not doing your self-learning. That's one of the things the Agile Manifesto talks about: learning, self-reflection, reflection in general on your team processes, but also individually. So I think that's something I would always encourage young people to do: just keep learning.

It doesn't necessarily need to be a formal education, it doesn't have to be a certification, but with online learning, there are so many resources out there compared to 20-plus years ago. There's just an abundance of information and you're not going to learn it all in one day or a few days. It's going to take a lifetime. But I think having that hunger and that need to learn would be my best piece of advice.

Shane Hastie: Gentlemen, thank you so much for taking the time to talk to us today. Really interesting conversation. If people want to continue the conversation, where do they find you?

Martin Kropp: You can find me on our website, of course, Martin Kropp at the FHNW website. Happy to answer any questions or remarks you have.

Craig Anslow: You can find me in Middle Earth, middle of Middle Earth, which is in Wellington, Aotearoa, New Zealand. I work at Victoria University of Wellington. You can find me there, or Google me, Craig Anslow, Twitter, LinkedIn and so on.

Shane Hastie: Thank you so very much. I’ll make sure all of those links are included in the show notes.


Subscribe for MMS Newsletter

By signing up, you will receive updates about our latest information.



InfoQ & QCon Events: Level up on Generative AI, Security, Platform Engineering, and More Upcoming

MMS Founder
MMS Artenisa Chatziou

Article originally posted on InfoQ. Visit InfoQ

For teams building and operating software systems, the need to navigate short-term and long-term critical priorities has never been more pressing. As software professionals, we understand that you’re continuously faced with challenges that require solutions. Topics like generative AI, scaling cloud-native architectures, performance engineering, resilience and modern distributed system design are no longer just buzzwords, but pivotal elements in practically all software development roadmaps.

As we navigate through these transformative times, the upcoming InfoQ events stand as a platform to help you stay ahead, learn valuable insights, and find practical solutions to your development challenges in 2024 and beyond. Our conference schedule for this year includes:

InfoQ Dev Summit
  • Timeline: Boston: June 24-25; Munich: Sep 2024
  • Focus: Essential topics that development teams should prioritize now for implementation in the upcoming 12 to 18 months.
  • Content: 20+ technical talks by senior software practitioners over 2 days, featuring 20+ speakers.
  • Adoption: Early Adopters and Early Majority

QCon Software Conference
  • Timeline: London: April 8-10; San Francisco: Nov 18-22
  • Focus: Innovations and emerging trends you should pay attention to in 2024-2025 to stay ahead of the adoption curve.
  • Content: 75+ talks across 15 strategic topics that cover technical, leadership, and cultural domains over 3 days, featuring 90+ speakers at each event.
  • Adoption: Innovators and Early Adopters

Who should attend our conferences? The events are carefully curated for senior software engineers, architects, and team leaders, offering practitioner insights into emerging trends, patterns, and practices and helping dev teams implement actionable advice to their technical priorities.

Navigate your current development priorities at InfoQ Dev Summit Boston

This two-day in-person InfoQ Dev Summit provides practical strategies for software development teams to clarify short-term critical development priorities. Featuring 20+ technical talks by senior software practitioners over two days (June 24-25) with parallel breakout sessions, the InfoQ Dev Summit emphasizes essential topics that development teams should prioritize now for implementation in the upcoming 12 to 18 months.

The first talk overviews of the InfoQ Dev Summit Boston include:

Beyond the talks, we’ve planned two evening social activities to deepen connections and exchange ideas in a casual, engaging environment – moments that often lead to the most transformative insights.

In response to numerous requests, we’re delighted to announce that we’ve extended our early bird registration period by one week. You have until February 20 to secure your ticket and enjoy savings of up to $200. Take advantage of this opportunity to join us in Boston!

Level-up on software emerging trends at the upcoming QCon Software Conferences

QCon London (April 8-10) and QCon San Francisco (November 18-22) international software development conferences offer insightful learnings from senior software developers driving change and innovation in software development. Each conference features 15 strategic topics across technical, leadership, and cultural domains over three days, featuring 80+ speakers at each event.

QCon London 2024

At QCon London (April 8-10), deep dive into 15 major technical topics our program committee of distinguished senior software leaders has carefully curated. Learn about the topics that will matter most in software tomorrow and join a community of senior software engineers, architects, and team leads from early adopter companies, including: Apple, UBS, JP Morgan, Monzo, NN, Klaviyo, Wix, Cisco, Mercedes-Benz Tech Innovation, OpenCredo, Canon, Airbus Operations, Expedia, Equal Experts, ING, Alfa Financial Software, Salesforce, plus many more. Register by February 13 to save with our limited early bird offer!

Here are five tracks you won’t want to miss:

  • What’s Next in GenAI and Large Language Models (LLMs): Explore the dynamic world of Large Language Models (LLMs) and their transformative potential across industries. Dive into the latest LLM research and development, discover their types, capabilities, and limitations, and explore their implications for the future of work, education, healthcare, and more.
  • The Tech of FinTech: Explore how FinTech startups and traditional financial institutions leverage innovative technologies, like cloud computing, to break regulatory barriers and enhance their business strategies.
  • Cloud-Native Engineering: Explore building cloud-native applications with scalability, resilience, and adaptability. Discover modern engineering practices in this session, with a practitioner-driven focus on effective strategies and pitfalls to avoid.
  • Securing Modern Software: In this track, we will explore where innovation meets cyber security and how we can bring security into our definitions of software quality – taking ownership of keeping our data, people, and systems safe from within the development team.
  • Architecture for the Age of AI: This track focuses on sharing practitioner-driven insights on what works (and what doesn’t) on AI-focused software architectures, enabling you to build and sustain the AI-based systems of the future. Explore the latest trends and techniques for building modern software architecture for AI systems and applications.

QCon San Francisco

QCon San Francisco returns November 18-22, focusing on innovations and emerging trends you should pay attention to in 2024-2025. With technical talks from international software practitioners, QCon will provide actionable insights and skills you can take back to your teams. Save now with our special launch pricing!

Provisional topics include:

  • Architectures You’ve Always Wondered About
  • Innovations in Data Engineering
  • Connecting Systems: APIs, Protocols, Observability
  • Engineering Leadership for All
  • Cloud-Native Engineering
  • Platforms, People & Process; Delivering Great Developer Experiences
  • What’s Next in GenAI and Large Language Models (LLMs)
  • Emerging Trends in the Frontend and Mobile Development
  • Performance Engineering Unleashed: Powering Efficiency and Innovation
  • Securing Modern Software
  • Architecture for the Age of AI
  • Efficient Programming Languages

In addition to the conference, QCon will host two days of training on November 21 and 22, run by people who have mastered their skills and want to help you master yours. Join and experience focused training, hands-on learning, and step-by-step walk-throughs with domain experts.

Don't miss the opportunity to level up on the skills most in demand to future-proof your career, with topics such as: practical insights on implementing Team Topologies, AI/ML mastery, AI engineering and building generative AI apps, mastering serverless, a microservices bootcamp, architecting scalable APIs, mastering cloud, K8s, and DevOps, building modern data analytics, soft skills for tomorrow's technical leaders, and more.



Presentation: From Runtime Efficiency to Carbon Efficiency

MMS Founder
MMS Michal Dorko

Article originally posted on InfoQ. Visit InfoQ

Transcript

Dorko: My name is Michal Dorko. I work as a software engineer for Goldman Sachs. I am part of the internal SecDb architecture team. We own an internal programming language called Slang, which will be our main topic. Before we dive deeper into the Slang language, let me introduce you to our SecDb ecosystem. SecDb is an ecosystem within Goldman Sachs which consists of an object-oriented database technology and a data synchronization mechanism. The other core components of SecDb are the language we'll be discussing, called Slang, which is a scripting language; an integrated development environment for Slang called SecView; various front office processes such as trade booking systems, trade model risk applications, and quoting applications; and various integrated controls which are required by our regulators.

What is Slang? (Security Language)

What is Slang? Before we dive deeper into Slang, let's first address a question which comes to mind: why do we have our own programming language? Slang was developed and designed in the early '90s for financial modeling, before other popular scripting languages, such as Python, were widely available. It was designed for mathematicians and financial modelers with no computer science background, to allow them to easily develop, implement, and deploy their models into production. It is based on C, because in those days, when Slang was created, C was the main language used for financial modeling. Therefore, the creators of Slang wanted the language to be as similar and familiar as possible to the C our financial modelers were using. Slang's features can be split into two main sections. Slang is a general-purpose language, but it has features that are specific to a language used for financial modeling in a financial institution. The general-purpose scripting language features include working with files, network connections, regular expressions, general data structures, and many others. Features which are quite specific to Slang include tight coupling and integration with the SecDb database for accessing and analyzing SecDb objects, and various built-in functionalities for performing quantitative calculations and creating financial models.

Slang Language Features

Slang is a case-insensitive scripting language. It is dynamically typed, has a built-in graph framework for financial modeling for expressing dependencies between financial instruments, a built-in rapid deployment SDLC process, and various built-in controls which are designed for financial institutions and are required by our regulators. Other features that are very important for this talk, and for understanding why we have been redesigning our runtime and why we have decided to write our own virtual machine, are that the interpreter, the datatypes, and all the built-in functions are written in C++, and that everything is implemented as a function. Slang has no keywords. Maybe a small interesting thing about the Slang frontend is that Slang variables and functions can contain spaces. As I mentioned, all of the datatypes in Slang are implemented in C++. Our internal representation for these data holders is the DT_VALUE. DT_VALUE is what is known as a discriminated union: a data structure which can hold different datatypes. At each point during the runtime, this discriminated union (in our case, with the internal name DT_VALUE) knows what kind of data it is holding. This information is stored in a datatype member, as you can see here. The actual data is stored in a C-style union. If the DT_VALUE contains a number, it is stored directly on the DT_VALUE for optimization purposes. Any other datatype is stored as a pointer to a more complex structure allocated on the heap.
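As a rough illustration of the idea, here is a minimal C++ sketch of a DT_VALUE-style discriminated union. All names here (DataType, Value, make_number, make_string) are invented for this example; the real SecDb definitions are internal and certainly differ in detail.

```cpp
#include <cassert>
#include <string>

// Hypothetical sketch of a tagged (discriminated) union in the style the
// talk describes; names are invented, not the real SecDb definitions.
enum class DataType { Number, String };

struct Value {
    DataType type;                 // tag: records which union member is live
    union {
        double number;             // numbers are stored inline on the Value
        std::string* string_ptr;   // other types live on the heap; the union
    } data;                        // holds only a pointer to them
};

inline Value make_number(double n) {
    Value v;
    v.type = DataType::Number;
    v.data.number = n;             // inline storage: no heap allocation
    return v;
}

inline Value make_string(const std::string& s) {
    Value v;
    v.type = DataType::String;
    v.data.string_ptr = new std::string(s);  // heap-allocated payload
    return v;
}
```

Checking the tag before touching the union is what makes this safe to use: every built-in operation dispatches on that tag at runtime, which is the dynamic-typing cost discussed later in the talk.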

Slang in Numbers – How Much Slang Do We Run?

Let's talk about the scale that we operate on when we talk about Slang. Currently, we have 200 million lines of code in production. We have more than half a million scripts. We have more than 4,000 active developers on a daily basis. We spend more than 300 million compute hours per week running processes which are written in Slang. Slang itself is quite a complex language that has been evolving since the early '90s. At this point in time, we have several hundred built-in datatypes and more than 10,000 built-in functions.

Slang SDLC (Rapid Prod Deployment)

I would like to briefly talk about the SDLC for Slang, which is quite unique and allows users and developers to deploy their changes to production very quickly. Slang scripts themselves are stored as binary objects in a database. The processes which execute Slang connect to a database, load their scripts from the database, and execute them locally. For this purpose, our SDLC has been designed so that our users, the developers of Slang, write their changes in what we call a user database. It's a development area. All the testing and implementation is done in the development user databases. Once a developer is happy with their change, they submit it through a review procedure using our internal review tooling. After appropriate approvals are granted, the changes are committed to a version control system, which is used mainly for audit purposes. After the change is committed, it is pushed into a production database, which is then replicated across the globe. After being pushed to production, the replication happens within a few seconds. Therefore, the change is then available to be picked up by other developers and by processes which are just starting or restarting. This allows us to quickly fix any production issues, address any bugs, but also simply iterate on the software that we are writing.

Current Slang Runtime (Tree-Walker Interpreter)

Let's now talk about the current Slang runtime, which is implemented as a tree-walker interpreter. Slang source code is parsed into an abstract syntax tree which is then directly executed, and each node in the tree knows how to evaluate itself and knows about its children. This is our internal representation of a SLANG_NODE. It's a C struct. As I mentioned, Slang was designed in the early '90s, and it still has a lot of artifacts from early C days. As you can see, each SLANG_NODE knows the type of the node, stored as an enum, and how many children it has. There is a function pointer, which is the implementation of the node: it is essentially what gets executed and handles the execution logic of the node. Then it has the actual children of the node. We also store error info, which is essentially our source information, such as the script name and line number recording where this abstract syntax tree node appeared when we were parsing the script.

Let's take this simple example. At the top we see variable var equals x plus y. This simple example is parsed into the abstract syntax tree which we see below. In our current tree-walker interpreter, we essentially start at the top level, which is the assignment operator, then walk to the left and execute the variable node, which has the appropriate FnVariable function that handles creation of the local variable. The next step in the interpreter is to step to the binary operator for addition, the plus operator. The plus operator takes two arguments, its left-hand side and right-hand side operands, which are represented as children of this node, and so the interpreter will go to the node on the left-hand side, which is for variable x, and evaluate it. It will then go to variable y, the right-hand side node of the plus operator, and evaluate it. Once these nodes are evaluated, the interpreter returns the results of the evaluation back to the parent node, in this case the binary plus operator. The function for the binary operator processes the results of the children, applies any logic which is applicable to the binary operator, and propagates the result back to the assignment operator, which also takes the result from the local variable creation, and then the interpreter finishes interpreting this simple expression at the top level.
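To make the walk concrete, here is a minimal tree-walker sketch in C++ for the var = x + y example. It is deliberately simplified: virtual dispatch instead of the C-style function pointers in SLANG_NODE, numbers only, and all names invented for illustration.

```cpp
#include <cassert>
#include <map>
#include <string>
#include <utility>

// Minimal tree-walker interpreter sketch (illustrative only; the real
// SLANG_NODE uses C-style function pointers rather than virtual dispatch).
struct Env { std::map<std::string, double> vars; };

struct Node {
    virtual ~Node() = default;
    virtual double eval(Env& env) = 0;  // each node evaluates itself
};

// Leaf node: reads a variable's value from the environment.
struct VariableNode : Node {
    std::string name;
    explicit VariableNode(std::string n) : name(std::move(n)) {}
    double eval(Env& env) override { return env.vars[name]; }
};

// Binary "+": walks both children, then combines their results.
struct AddNode : Node {
    Node* lhs;
    Node* rhs;
    AddNode(Node* l, Node* r) : lhs(l), rhs(r) {}
    double eval(Env& env) override { return lhs->eval(env) + rhs->eval(env); }
};

// Assignment: evaluates the right-hand side, stores it as a local variable.
struct AssignNode : Node {
    std::string target;
    Node* value;
    AssignNode(std::string t, Node* v) : target(std::move(t)), value(v) {}
    double eval(Env& env) override {
        double result = value->eval(env);
        env.vars[target] = result;
        return result;
    }
};
```

Note how control bounces between nodes on every step: evaluating x + y means two calls and two returns before the plus can even run. That per-node dispatch overhead is exactly what the bytecode VM described later is designed to remove.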

Previous Attempts (Failure is Simply the Opportunity to Try Again)

What have we tried previously to improve our runtime? So far, we have tried a few things. We've tried to lower our runtime into Lisp. We tried CSlang, which is our codename for compiled Slang; it was an attempt to compile Slang directly and emit assembly instructions. Our recent attempts were in the space of TruffleSlang: hosting Slang on GraalVM via the Truffle framework. However, the challenge we faced is that, as we discussed earlier, there are 200 million lines of Slang, several hundred built-in datatypes, and tens of thousands of built-in functions which have no real specification, don't follow any standard, and have been developed over the 25-plus-year history of Slang. This makes it impossible for us to do a big bang migration to an alternative runtime. The main reason why we failed to adopt alternative runtimes such as GraalVM was that hosting on Graal becomes prohibitively complicated and expensive, mainly when it comes to boundary crossings. The JVM and GraalVM have very good C interoperability. But as we mentioned, most of the Slang runtime, all of the datatypes, all of the built-ins, all of the functionality in the current interpreter, is written in C++, and GraalVM doesn't have good interoperability with C++, due to things such as virtual tables.

Why Can’t We Use LLVM? (Universal Solution to All Our Problems)

Another obvious question is, why can't we use LLVM? There is a fundamental mismatch between the strengths of LLVM, which targets statically typed languages, and Slang, which is an extremely dynamic language. Types in Slang can be defined at runtime, and their behavior can be modified at runtime. Every time we operate on any variable in Slang, we need to dispatch calls via the implementation of its datatype. This makes it very difficult for us to map onto LLVM semantics. It's the same reason why other scripting languages like Python, Ruby, and JavaScript, which are similarly dynamic in nature to Slang, don't use LLVM for their runtimes.

SlangVM (Semantic Bytecode)

This brings us to SlangVM, the internal virtual machine we've been working on. SlangVM is implemented as a stack-based bytecode interpreter. It shares a few aspects of its implementation with the current tree-walker interpreter: the type system, the variable representation, and the Slang stack frame representation. The compilation is a purely additive step: the SlangVM compiler does not discard, destroy, or modify the current abstract syntax tree. As a result, we can gracefully fall back to the tree-walker interpreter whenever we are unable to compile and evaluate expressions and code within the SlangVM runtime. This is what our current compilation pipeline looks like. We have Slang source code which is parsed by the Slang parser into an abstract syntax tree, which can be directly interpreted and executed by the tree-walker interpreter. In addition to that, we have the SlangVM compiler, which emits SlangVM-compatible bytecode. This bytecode is then installed into the virtual machine, which executes it. In cases where we are unable to compile the current abstract syntax tree, or we are unable to execute the bytecode, we have a simple graceful fallback, where the VM calls back into the tree-walker interpreter, the tree-walker interpreter executes the abstract syntax tree, and it hands execution back to the virtual machine.
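The additive compile-with-fallback design can be sketched as follows. Everything here (AstNode, try_compile, and the trivial one-byte "bytecode") is a made-up stand-in that shows only the control flow of the graceful fallback, not the real SlangVM interfaces.

```cpp
#include <cassert>
#include <cstdint>
#include <optional>
#include <vector>

using Bytecode = std::vector<uint8_t>;

// Stand-in AST node: "compilable" marks whether the (pretend) compiler
// supports this construct. The node itself is never modified or discarded.
struct AstNode {
    bool compilable;
    int value;
};

// Pretend compiler: succeeds only for supported constructs. Returning
// nullopt means "fall back to the tree-walker", never a hard error.
std::optional<Bytecode> try_compile(const AstNode& node) {
    if (!node.compilable) return std::nullopt;
    return Bytecode{static_cast<uint8_t>(node.value)};  // trivial 1-byte program
}

int run_vm(const Bytecode& code) { return code[0]; }       // fast path
int tree_walk(const AstNode& node) { return node.value; }  // complete path

// Because compilation is additive, both paths are always available.
int evaluate(const AstNode& node) {
    if (auto code = try_compile(node)) {
        return run_vm(*code);   // compiled: execute in the bytecode VM
    }
    return tree_walk(node);     // graceful fallback to the AST interpreter
}
```

The design choice worth noting is that a compiler gap is never a correctness problem here: unsupported constructs simply stay on the slower, proven tree-walker path.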

SlangVM operates on the bytecode. The bytecode is represented as an array of bytes, laid out in memory as a series of instructions, each followed by zero or more arguments. Each opcode has its own argument handling. We support a few datatypes natively, such as integral types, opcodes, various addresses and jump offsets, and constant indexes. Any other datatype is stored in a constant pool that a constant index points to. Let’s look at a few examples on the right-hand side. For example, OP_ADD is an opcode for performing addition. It’s a single opcode which takes no arguments, occupying 8 bits, or 1 byte. Because OP_ADD operates on the stack, it loads the two values from the top of the stack and adds them; it doesn’t take any argument, so in memory it will be immediately followed by another opcode. Take, for example, another opcode, OP_JUMP_SHORT, for performing short jumps. This opcode has one argument, so the memory layout would be 1 byte (8 bits) for OP_JUMP_SHORT, followed by another 1 byte holding the jump offset argument. The interpreter first reads the opcode, interprets it as a jump, and knows that this opcode has one argument associated with it, so the VM reads another 1 byte and interprets it as the jump offset, essentially representing how far we want to jump within the bytecode.
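The decoding discipline described above can be sketched in a few lines. The opcode numbers and arity table here are invented for illustration; only the layout idea (an opcode byte followed by its known number of argument bytes) comes from the talk.

```python
# Assumed opcode numbering -- not the real SlangVM encoding.
OP_ADD = 0x01         # no arguments: occupies 1 byte total
OP_JUMP_SHORT = 0x02  # one 1-byte argument (jump offset): 2 bytes total

# Each opcode knows how many argument bytes follow it.
ARITY = {OP_ADD: 0, OP_JUMP_SHORT: 1}

def decode(bytecode):
    """Walk the flat byte stream, yielding (opcode, argument-list) pairs."""
    ip, out = 0, []
    while ip < len(bytecode):
        op = bytecode[ip]
        ip += 1
        n = ARITY[op]                      # how many bytes to consume next
        out.append((op, list(bytecode[ip:ip + n])))
        ip += n
    return out

# OP_ADD followed immediately by OP_JUMP_SHORT with an offset of 4:
stream = bytes([OP_ADD, OP_JUMP_SHORT, 4])
print(decode(stream))  # [(1, []), (2, [4])]
```

Because every opcode carries its arity, the interpreter never needs delimiters between instructions; the stream stays maximally compact.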

Let’s take a look at our earlier example, the expression var equals x plus y. If we compile this expression into SlangVM bytecode, we produce the bytecode you can see in the left column, and this is how it would be laid out in memory. Our first opcode reads a local variable; it consists of the opcode and one argument. The next opcode, OP_READ_VARIABLE, reads the actual value of the variable and pushes it onto the stack; it consists only of the opcode, since it operates on the stack and takes no argument. It is immediately followed by another OP_READ_LOCAL_VARIABLE, which again consists of two parts: the opcode and an index to a local variable structure. That is followed by the single opcode for reading a variable, and then by OP_ADD, which, as we’ve seen in the previous example, takes no arguments since it operates on the stack, so it’s a single opcode. Then we have the remaining three opcodes, of which OP_ENSURE_LOCAL_VARIABLE takes a single argument, so it is represented as an opcode followed by a single argument. The remaining two instructions are single opcodes; they take no arguments. This would be the layout in memory in the bytecode stream. For this example, the main benefit we have over the tree-walker interpreter is that the control flow never leaves the main interpreter loop. The main benefit we gain from this simple compilation into bytecode and locating the [inaudible 00:19:34] in bytecode is the compact representation and improved locality, so we benefit from CPU caching on modern CPU architectures.

Value Stack

SlangVM is a stack-based virtual machine. The value stack is a fixed-size array, and most of the stack manipulation is done as a side effect of an operation. We store DT_VALUEs, our discriminated unions holding all of Slang’s internal datatypes, directly on the stack, so the SlangVM can operate directly on DT_VALUEs without any further translation, serialization, or deserialization. Let’s take a look at another, slightly more complex example: we have x equals 42, then a variable y, which is x plus 1. These two simple statements compile into the bytecode we can see on screen. Now we’re going to simulate the execution of the VM and the state of the stack. To start, when we execute OP_READ_CONSTANT, we simply read the constant value 42 and push it onto the value stack. The next instruction reads the local variable: it ensures that the variable exists, creating it if it doesn’t, and pushes a reference to the variable, like a pointer, onto the stack. The next opcode, OP_ASSIGN_VAR, consumes the values from the stack and performs the assignment: it takes the variable x and the value 42, assigns 42 to x, and returns the value of this evaluation, which is 42, pushing it back onto the stack. All expressions in Slang return a value, so even assignment to a variable returns the value. Because we don’t do anything with this value, we then simply pop it off the stack.

Next, on the second line of code, the first instruction corresponding to that line reads the local variable x and pushes it onto the stack. Now we need to read the actual value of that variable: the interpreter looks at the variable and pushes its value, the 42 we assigned on the previous line. The next opcode reads a constant, corresponding to the value 1; it simply reads the value 1 and pushes it onto the stack, so now we have the values 1 and 42 on our stack. OP_ADD then reads the two top values from the stack, performs the addition, and pushes the result back onto the stack. At this point, we have the value 43 at the top of our stack. Now we perform the assignment of this addition. The next opcode, for ensuring a local variable, again creates the local variable if it doesn’t exist; either way, we push a reference to it onto the stack. Then we perform the assignment: we take the variable y and the value 43 off the stack, and put the result back onto the stack. Finally, IMPLICIT_RETURN reflects the fact that, as I mentioned, everything in Slang has to return a value: when we compile this small expression, we implicitly return the last value, and because we don’t do anything with it, we simply pop it off the value stack.
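The trace above can be reproduced with a toy stack machine. The opcode names follow the talk, but the encoding is simplified and assumed: instructions are tuples rather than bytes, and variable "references" are modeled as names pushed onto the value stack.

```python
def run(program, consts):
    """Execute a toy instruction list for: x = 42; y = x + 1."""
    stack, locals_ = [], {}
    for op, *args in program:
        if op == "OP_READ_CONSTANT":
            stack.append(consts[args[0]])
        elif op == "OP_ENSURE_LOCAL_VARIABLE":
            locals_.setdefault(args[0], None)   # create if missing
            stack.append(args[0])               # push reference to variable
        elif op == "OP_READ_LOCAL_VARIABLE":
            stack.append(args[0])               # push reference only
        elif op == "OP_READ_VARIABLE":
            stack.append(locals_[stack.pop()])  # replace reference with value
        elif op == "OP_ASSIGN_VAR":
            name = stack.pop()                  # reference pushed after value
            value = stack.pop()
            locals_[name] = value
            stack.append(value)                 # assignments yield their value
        elif op == "OP_ADD":
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op == "OP_POP":
            stack.pop()                         # discard an unused result
    return locals_

program = [
    # x = 42
    ("OP_READ_CONSTANT", 0),            # push 42
    ("OP_ENSURE_LOCAL_VARIABLE", "x"),  # push reference to x
    ("OP_ASSIGN_VAR",),                 # x = 42, pushes 42 back
    ("OP_POP",),                        # value unused: pop it
    # y = x + 1
    ("OP_READ_LOCAL_VARIABLE", "x"),    # push reference to x
    ("OP_READ_VARIABLE",),              # replace it with its value, 42
    ("OP_READ_CONSTANT", 1),            # push 1
    ("OP_ADD",),                        # push 43
    ("OP_ENSURE_LOCAL_VARIABLE", "y"),  # push reference to y
    ("OP_ASSIGN_VAR",),                 # y = 43, pushes 43 back
    ("OP_POP",),                        # implicit return value, discarded
]
print(run(program, consts=[42, 1]))     # {'x': 42, 'y': 43}
```

Note how every mutation of the stack happens as a side effect of an opcode, exactly as described: no opcode here returns a value through any channel other than the value stack.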

Virtual Machine

The core part of our virtual machine is a loop which simply goes through the bytecode, reads it opcode by opcode, and operates on each opcode. We have a huge switch statement that switches on each opcode, with handling for each one. The main challenge we faced was integrating this with our tooling. As I mentioned, we have an integrated development environment for Slang called SecView. SecView has a built-in debugger as well as a profiler. The main challenge in migrating to SlangVM was making sure we could support all the debugging tooling, and all of the profiling and metrics, that the current tooling supports. The main addition to the interpreter was implementing profiling and debugging. As you can see on the screen, it can be simplified as pseudocode: we first read the opcode; then, if we are profiling, we collect profiling data; if we’re debugging, we trigger the debugger and handle user interaction with it; and then we interpret the opcode. If you’re debugging, this happens on every read, and the same goes for profiling. To signify how important and how complex the integration with existing tooling was: we have been investing heavily over the past few months, and spent more than half a year integrating SlangVM with the tooling, making sure we support all the debugging features our developers are familiar with, and that the profiling provides correct data, presented to our users in a format they are familiar with.
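The pseudocode on the slide can be sketched as a dispatch loop with optional hooks. The hook signatures and the two opcodes here are illustrative, not the real SlangVM (or SecView) API; the structure, read opcode, profile, debug, then interpret, is what the talk describes.

```python
def run(bytecode, profiler=None, debugger=None):
    """Dispatch loop: per-opcode profiling and debugging hooks, then interpret."""
    ip, stack = 0, []
    while ip < len(bytecode):
        op = bytecode[ip]
        if profiler is not None:
            profiler(ip, op)            # collect profiling data for this opcode
        if debugger is not None:
            debugger(ip, op, stack)     # breakpoints / user interaction
        # Interpret the opcode -- a big switch in the real (C++) implementation.
        if op == "OP_PUSH":
            ip += 1
            stack.append(bytecode[ip])  # inline argument follows the opcode
        elif op == "OP_ADD":
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        ip += 1
    return stack[-1]

# Profiling hook: record every opcode executed, without touching the result.
seen = []
result = run(["OP_PUSH", 2, "OP_PUSH", 3, "OP_ADD"],
             profiler=lambda ip, op: seen.append(op))
print(result)  # 5
print(seen)    # ['OP_PUSH', 'OP_PUSH', 'OP_ADD']
```

The cost the talk alludes to is visible in the structure: the hook checks sit inside the hot loop, so supporting the existing tooling means paying (at least) two branches on every single opcode.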

Future Enhancements

Our goal for future enhancements in this space is persisting bytecode to a database. As I mentioned, we currently store scripts as binary data in a database, and the interpreter process loads a script from the database and interprets it. The goal is to store the compiled bytecode, or some form of intermediate representation, in the database instead. This would allow us to remove the parsing and compilation steps and improve performance, as the client would load already-compiled bytecode from the database and execute it directly. Another feature we are planning to implement in the near future is a proper control-flow graph, to enable us to perform optimizations at compile time. We would also like to implement optimizations such as elimination of unreachable code, compression of the bytecode, and constant folding and propagation. Ultimately, what we would like to target is compiling our semantic bytecode, the SlangVM bytecode, to a lower-level target, such as x86 assembly directly.
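One of the planned optimizations, constant folding, is simple to illustrate on the toy tuple AST used earlier (again an assumption for illustration, not Slang's real representation): subtrees whose operands are all constants are collapsed at compile time, so no runtime additions remain for them.

```python
def fold(node):
    """Recursively fold ('add', a, b) nodes whose children are constants."""
    if isinstance(node, tuple) and node[0] == "add":
        a, b = fold(node[1]), fold(node[2])
        if isinstance(a, int) and isinstance(b, int):
            return a + b              # replace the whole subtree with its value
        return ("add", a, b)          # keep partially-folded structure
    return node                       # leaf: constant or variable name

# (1 + 2) + x folds to 3 + x: the left subtree is evaluated once, at compile
# time, instead of on every execution.
print(fold(("add", ("add", 1, 2), "x")))  # ('add', 3, 'x')
```

In a dynamic language this is only safe when the compiler can prove the operands really are plain constants, which is one reason the talk ties these optimizations to first building a proper control-flow graph.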

Benchmarks

How do we ensure that our SlangVM is faster than the current tree-walker interpreter? We run a series of benchmarks consisting of three primitive benchmarks and a set of microbenchmarks written purely in Slang, such as Spectral Norm, [inaudible 00:27:58], Merge Sort, Factorial, Fibonacci, Is Prime, and a few others, in addition to running examples of real-world pricing applications and financial models from our users, to ensure that SlangVM outperforms the current tree-walker interpreter. We will also be ensuring that any further optimizations we make, and plan to make in the future, further improve the performance of SlangVM over the current tree-walker interpreter.

Expectations (Faster than Light)

Our expectation is that in the initial release of the VM, without any optimization, simply compiling the AST into bytecode, we will achieve at least a 10% improvement against the baseline. Per process, that translates into 5%, as on average we spend 50% of overall process runtime inside the interpreter loop. The remaining 50% is spent in native C++ code, in natively implemented functions and add-ins, and in various I/O, be it file system or network I/O. Further down the line, in a few years’ time, when we have implemented all our optimizations, we expect to be two times faster than the current tree-walker interpreter, which should translate to a 25% improvement for our average processes. Once we get to the stage where we can JIT our bytecode into native assembly, we expect a more than 10x speedup against baseline Slang. This will translate into an estimated reduction of 135 million compute hours per week; we currently spend more than 300 million compute hours per week running Slang processes. Therefore, our estimate is that once we hit the five-year mark, with all our optimizations and JITing to native assembly achieved, we will reduce our footprint by 135 million compute hours per week.
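The compute-hour figures imply roughly a tenfold interpreter speedup at the JIT stage. A quick check of the arithmetic, assuming (as stated) that 50% of process runtime sits in the interpreter loop:

```python
# Amdahl's-law-style check of the numbers in the talk.
interpreter_share = 0.50          # fraction of runtime in the interpreter loop
total_hours = 300_000_000         # weekly Slang compute hours (stated figure)

def process_saving(speedup):
    # New runtime = untouched native/I-O half + interpreter half / speedup.
    new = (1 - interpreter_share) + interpreter_share / speedup
    return 1 - new                # fraction of total runtime saved

print(process_saving(2))                             # 0.25 -> the quoted 25%
print(round(process_saving(10) * total_hours))       # 135000000 hours/week
```

A 2x interpreter speedup halves the 50% share to 25%, matching the quoted per-process improvement, and a 10x speedup shrinks it to 5%, a 45% overall saving, which is exactly 135 million of 300 million weekly compute hours.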

See more presentations with transcripts

Subscribe for MMS Newsletter

By signing up, you will receive updates about our latest information.

  • This field is for validation purposes and should be left unchanged.


How to Develop a Culture of Quality in Software Organizations

MMS Founder
MMS Ben Linders

Article originally posted on InfoQ. Visit InfoQ

According to Erika Chestnut, software organizations can develop a culture of quality with a clear commitment from leadership, not only to endorse quality efforts in software teams, but also to actively champion them. This commitment and advocacy should manifest in data-driven decision-making that strikes a balance between innovation and quality, ensuring that we maintain the highest quality of existing offerings, as well as delivering high-quality innovation.

Erika Chestnut gave a keynote about developing a quality culture at Testing United 2023.

Strong quality leadership is essential for fostering a culture of quality throughout the software company, Chestnut mentioned. Effective communication and regular training on quality standards are crucial for empowering employees to champion a culture of quality.

Chestnut stated that a culture of quality encourages innovation, as employees feel supported and empowered to explore new ideas and solutions, knowing that the organization values high standards and continuous improvement.

Product innovation without emphasis on maintaining high product quality can compromise customer satisfaction and the software company’s reputation, Chestnut said. Quality efforts often suffer when they are not aligned with business outcomes, leading to their perceived lack of value.

A culture of quality within a software organization is characterized by a pervasive and steadfast commitment to excellence in every aspect of its operations. In such a culture, quality is not just a department or a set of procedures, but a core value that informs decision-making at all levels, as Chestnut explained:

Leadership consistently demonstrates and communicates the importance of quality, setting a clear example and expectations. Employees across the board understand and take ownership of their role in maintaining quality, feeling empowered and responsible for upholding high standards.

A lack of data-driven decision-making hinders the ability to maintain and improve quality over time, as data and analytics are vital for identifying improvement areas and tracking progress, Chestnut mentioned:

A data-driven approach underpins the quality initiatives, ensuring decisions are informed by accurate and meaningful insights.

Software organizations with a clear culture of quality often experience a significant boost in customer satisfaction and loyalty, Chestnut said. This often leads to increased market share and a stronger brand reputation.

Internally, a quality culture fosters higher employee morale and engagement. Employees take pride in their work and are motivated by the knowledge that they are contributing to a product or service of high standard, Chestnut mentioned. This often results in lower turnover rates and attracts top talent who are eager to work in an environment where excellence is valued.

InfoQ interviewed Erika Chestnut about developing a quality culture in software organizations.

InfoQ: How do we build a quality paved path?

Erika Chestnut: A paved or “defined” path to quality improves the culture of quality across your entire organization. It is a living document and collection of the best practices, procedures, guidelines, and tools that are unique to each organization. This document makes it easier for teams to produce high-quality outcomes as it aims to reduce friction, remove obstacles, and provide clarity for delivery teams.

When building a paved path, you want to collaborate deeply with stakeholders across the SDLC to elevate opportunities to infuse quality in all areas of your delivery flow, not just the development step.

InfoQ: How can we balance quality and innovation in software development?

Chestnut: It requires a strategic approach that acknowledges the interdependence of these two elements. The key is to integrate quality considerations into the innovation process from the outset, rather than viewing them as separate or competing agendas. This involves fostering a culture that values both creativity and attention to detail, encouraging teams to think innovatively while maintaining a focus on quality standards.

It’s crucial to establish flexible but clear guidelines that allow for experimentation and risk-taking, yet have robust quality checks and balances in place. Data-driven insights should guide both innovation and quality assurance decisions, helping to strike a balance where new ideas are tested and refined without compromising on quality. Ultimately, the goal is to create an environment where quality and innovation are not seen as trade-offs but as complementary forces driving the organization’s success.

InfoQ: What can be done to maintain a sustainable culture of quality?

Chestnut: Maintaining a sustainable culture of quality in an organization hinges on continuous leadership commitment and integrating quality into every aspect of the business. This effort involves regularly reinforcing quality values, celebrating successes, and learning from setbacks.

Key to this culture is actively engaging employees, encouraging their input and involvement in quality initiatives, and providing them with regular training and development opportunities.

In essence, a sustainable quality culture is fostered through a blend of strategic leadership, employee engagement, integrated processes, and continuous adaptation.



abrdn plc Purchases 7,188 Shares of MongoDB, Inc. (NASDAQ:MDB) – Defense World

MMS Founder
MMS RSS

Posted on mongodb google news. Visit mongodb google news

abrdn plc boosted its holdings in shares of MongoDB, Inc. (NASDAQ:MDB – Free Report) by 46.0% during the 3rd quarter, according to the company in its most recent disclosure with the Securities and Exchange Commission. The institutional investor owned 22,803 shares of the company’s stock after acquiring an additional 7,188 shares during the period. abrdn plc’s holdings in MongoDB were worth $7,887,000 as of its most recent filing with the Securities and Exchange Commission.

Several other hedge funds and other institutional investors have also recently added to or reduced their stakes in the business. Simplicity Solutions LLC lifted its holdings in shares of MongoDB by 2.2% in the second quarter. Simplicity Solutions LLC now owns 1,169 shares of the company’s stock valued at $480,000 after purchasing an additional 25 shares in the last quarter. Assenagon Asset Management S.A. lifted its holdings in shares of MongoDB by 1.4% in the second quarter. Assenagon Asset Management S.A. now owns 2,239 shares of the company’s stock valued at $920,000 after purchasing an additional 32 shares in the last quarter. Veritable L.P. lifted its holdings in shares of MongoDB by 1.4% in the second quarter. Veritable L.P. now owns 2,321 shares of the company’s stock valued at $954,000 after purchasing an additional 33 shares in the last quarter. Choreo LLC lifted its holdings in shares of MongoDB by 3.5% in the second quarter. Choreo LLC now owns 1,040 shares of the company’s stock valued at $427,000 after purchasing an additional 35 shares in the last quarter. Finally, Yousif Capital Management LLC lifted its holdings in shares of MongoDB by 4.8% in the third quarter. Yousif Capital Management LLC now owns 762 shares of the company’s stock valued at $264,000 after purchasing an additional 35 shares in the last quarter. 88.89% of the stock is currently owned by institutional investors.

Wall Street Analysts Weigh In

A number of brokerages have recently commented on MDB. Mizuho boosted their price target on MongoDB from $330.00 to $420.00 and gave the company a “neutral” rating in a report on Wednesday, December 6th. Bank of America began coverage on MongoDB in a research report on Thursday, October 12th. They issued a “buy” rating and a $450.00 target price on the stock. Capital One Financial raised MongoDB from an “equal weight” rating to an “overweight” rating and set a $427.00 target price on the stock in a research report on Wednesday, November 8th. UBS Group reaffirmed a “neutral” rating and issued a $410.00 target price (down previously from $475.00) on shares of MongoDB in a research report on Thursday, January 4th. Finally, Tigress Financial lifted their target price on MongoDB from $490.00 to $495.00 and gave the company a “buy” rating in a research report on Friday, October 6th. One investment analyst has rated the stock with a sell rating, four have issued a hold rating and twenty-one have issued a buy rating to the company. Based on data from MarketBeat.com, MongoDB currently has an average rating of “Moderate Buy” and an average target price of $429.50.

View Our Latest Analysis on MongoDB

MongoDB Trading Up 6.6%

Shares of NASDAQ:MDB opened at $436.01 on Friday. The company has a market capitalization of $31.47 billion, a P/E ratio of -165.16 and a beta of 1.24. MongoDB, Inc. has a 1 year low of $189.59 and a 1 year high of $442.84. The company has a debt-to-equity ratio of 1.18, a current ratio of 4.74 and a quick ratio of 4.74. The business’s 50 day simple moving average is $403.38 and its 200 day simple moving average is $381.58.

MongoDB (NASDAQ:MDB – Get Free Report) last issued its earnings results on Tuesday, December 5th. The company reported $0.96 earnings per share for the quarter, beating analysts’ consensus estimates of $0.51 by $0.45. The firm had revenue of $432.94 million for the quarter, compared to the consensus estimate of $406.33 million. MongoDB had a negative return on equity of 20.64% and a negative net margin of 11.70%. The company’s revenue was up 29.8% compared to the same quarter last year. During the same quarter last year, the business posted ($1.23) EPS. On average, research analysts anticipate that MongoDB, Inc. will post -1.63 EPS for the current fiscal year.

Insider Buying and Selling at MongoDB

In related news, Director Dwight A. Merriman sold 2,000 shares of the firm’s stock in a transaction on Tuesday, November 7th. The stock was sold at an average price of $365.30, for a total transaction of $730,600.00. Following the sale, the director now owns 1,189,159 shares of the company’s stock, valued at approximately $434,399,782.70. The transaction was disclosed in a document filed with the SEC, which is available at this hyperlink. Also, CEO Dev Ittycheria sold 100,500 shares of the firm’s stock in a transaction on Tuesday, November 7th. The stock was sold at an average price of $375.00, for a total value of $37,687,500.00. Following the transaction, the chief executive officer now directly owns 214,177 shares of the company’s stock, valued at $80,316,375. The disclosure for this sale can be found here. In the last quarter, insiders have sold 144,277 shares of company stock valued at $55,549,581. 4.80% of the stock is currently owned by corporate insiders.

MongoDB Profile

(Free Report)

MongoDB, Inc provides general purpose database platform worldwide. The company offers MongoDB Atlas, a hosted multi-cloud database-as-a-service solution; MongoDB Enterprise Advanced, a commercial database server for enterprise customers to run in the cloud, on-premise, or in a hybrid environment; and Community Server, a free-to-download version of its database, which includes the functionality that developers need to get started with MongoDB.

Featured Stories

Institutional Ownership by Quarter for MongoDB (NASDAQ:MDB)



Receive News & Ratings for MongoDB Daily – Enter your email address below to receive a concise daily summary of the latest news and analysts’ ratings for MongoDB and related companies with MarketBeat.com’s FREE daily email newsletter.

Article originally posted on mongodb google news. Visit mongodb google news



NoSQL Market Share, Size, Demand, Industry Statistics and Research Report 2024-2032

MMS Founder
MMS RSS

Posted on nosqlgooglealerts. Visit nosqlgooglealerts

BROOKLYN, NY, USA, February 1, 2024 /EINPresswire.com/ — According to IMARC Group, the global NoSQL market size reached US$ 9.5 Billion in 2023. Looking forward, IMARC Group expects the market to re…




Major Indexes Hit New Highs; MongoDB, Axon Enterprise, Synopsys Flash Buy Signals

MMS Founder
MMS RSS

Posted on mongodb google news. Visit mongodb google news

Notice: Information contained herein is not and should not be construed as an offer, solicitation, or recommendation to buy or sell securities. The information has been obtained from sources we believe to be reliable; however no guarantee is made or implied with respect to its accuracy, timeliness, or completeness. Authors may own the stocks they discuss. The information and content are subject to change without notice. *Real-time prices by Nasdaq Last Sale. Realtime quote and/or trade prices are not sourced from all markets.

© 2000-2024 Investor’s Business Daily, LLC All rights reserved

Article originally posted on mongodb google news. Visit mongodb google news



New York State Common Retirement Fund Sells 11367 Shares of MongoDB, Inc. (NASDAQ:MDB)

MMS Founder
MMS RSS

Posted on mongodb google news. Visit mongodb google news

New York State Common Retirement Fund lessened its holdings in shares of MongoDB, Inc. (NASDAQ:MDB – Free Report) by 8.2% in the 3rd quarter, according to its most recent filing with the SEC. The fund owned 127,650 shares of the company’s stock after selling 11,367 shares during the period. New York State Common Retirement Fund owned approximately 0.18% of MongoDB worth $44,149,000 at the end of the most recent reporting period.

Other hedge funds have also recently added to or reduced their stakes in the company. Raymond James & Associates increased its holdings in shares of MongoDB by 32.0% during the 1st quarter. Raymond James & Associates now owns 4,922 shares of the company’s stock worth $2,183,000 after purchasing an additional 1,192 shares during the period. PNC Financial Services Group Inc. lifted its position in shares of MongoDB by 19.1% in the 1st quarter. PNC Financial Services Group Inc. now owns 1,282 shares of the company’s stock worth $569,000 after acquiring an additional 206 shares during the period. MetLife Investment Management LLC bought a new stake in shares of MongoDB during the first quarter valued at approximately $1,823,000. Panagora Asset Management Inc. boosted its stake in MongoDB by 9.8% during the first quarter. Panagora Asset Management Inc. now owns 1,977 shares of the company’s stock worth $877,000 after buying an additional 176 shares in the last quarter. Finally, Vontobel Holding Ltd. increased its holdings in MongoDB by 100.3% in the 1st quarter. Vontobel Holding Ltd. now owns 2,873 shares of the company’s stock valued at $1,236,000 after buying an additional 1,439 shares during the period. Institutional investors own 88.89% of the company’s stock.

MongoDB Trading Up 2.1%

NASDAQ:MDB opened at $409.07 on Friday. The company has a debt-to-equity ratio of 1.18, a current ratio of 4.74 and a quick ratio of 4.74. The business’s 50-day moving average price is $402.81 and its two-hundred day moving average price is $381.39. MongoDB, Inc. has a 1-year low of $189.59 and a 1-year high of $442.84.

MongoDB (NASDAQ:MDB – Get Free Report) last released its quarterly earnings data on Tuesday, December 5th. The company reported $0.96 EPS for the quarter, beating analysts’ consensus estimates of $0.51 by $0.45. The business had revenue of $432.94 million during the quarter, compared to analysts’ expectations of $406.33 million. MongoDB had a negative net margin of 11.70% and a negative return on equity of 20.64%. The firm’s quarterly revenue was up 29.8% on a year-over-year basis. During the same quarter in the prior year, the company earned ($1.23) EPS. As a group, research analysts expect that MongoDB, Inc. will post -1.63 earnings per share for the current year.

Insider Activity

In other MongoDB news, Director Dwight A. Merriman sold 2,000 shares of the firm’s stock in a transaction that occurred on Tuesday, November 7th. The shares were sold at an average price of $365.30, for a total transaction of $730,600.00. Following the sale, the director now directly owns 1,189,159 shares in the company, valued at approximately $434,399,782.70. The sale was disclosed in a filing with the Securities & Exchange Commission, which is accessible through this link. Also, CAO Thomas Bull sold 359 shares of MongoDB stock in a transaction dated Tuesday, January 2nd. The shares were sold at an average price of $404.38, for a total value of $145,172.42. Following the completion of the sale, the chief accounting officer now directly owns 16,313 shares in the company, valued at approximately $6,596,650.94. The disclosure for this sale can be found here. Insiders have sold 144,277 shares of company stock worth $55,549,581 in the last 90 days. Corporate insiders own 4.80% of the company’s stock.

Wall Street Analysts Weigh In

MDB has been the topic of a number of recent research reports. Barclays lifted their price objective on MongoDB from $470.00 to $478.00 and gave the company an “overweight” rating in a report on Wednesday, December 6th. Stifel Nicolaus reiterated a “buy” rating and set a $450.00 price objective on shares of MongoDB in a research note on Monday, December 4th. UBS Group restated a “neutral” rating and issued a $410.00 target price (down previously from $475.00) on shares of MongoDB in a research note on Thursday, January 4th. DA Davidson restated a “neutral” rating and issued a $405.00 price target on shares of MongoDB in a research report on Friday, January 26th. Finally, TheStreet raised shares of MongoDB from a “d+” rating to a “c-” rating in a report on Friday, December 1st. One analyst has rated the stock with a sell rating, four have assigned a hold rating and twenty-one have given a buy rating to the stock. According to MarketBeat, MongoDB presently has an average rating of “Moderate Buy” and a consensus price target of $429.50.

MongoDB Profile

MongoDB, Inc. provides a general purpose database platform worldwide. The company offers MongoDB Atlas, a hosted multi-cloud database-as-a-service solution; MongoDB Enterprise Advanced, a commercial database server for enterprise customers to run in the cloud, on-premises, or in a hybrid environment; and Community Server, a free-to-download version of its database, which includes the functionality that developers need to get started with MongoDB.

Read More

Want to see what other hedge funds are holding MDB? Visit HoldingsChannel.com to get the latest 13F filings and insider trades for MongoDB, Inc. (NASDAQ:MDB).

Institutional Ownership by Quarter for MongoDB (NASDAQ:MDB)

This instant news alert was generated by narrative science technology and financial data from MarketBeat in order to provide readers with the fastest and most accurate reporting. This story was reviewed by MarketBeat’s editorial team prior to publication. Please send any questions or comments about this story to contact@marketbeat.com.


Article originally posted on mongodb google news. Visit mongodb google news