Month: August 2023

MMS • David Grizzanti
Article originally posted on InfoQ. Visit InfoQ

Key Takeaways
- Staff+ engineering roles vary in scope and responsibility across companies, and often call for skills that academically trained software engineers were never formally taught.
- Staff+ engineers often need help with where to focus their attention, how much hands-on work is enough, and how to juggle all their responsibilities.
- While engineering is a scientific discipline by nature, we can look at the life of an artist and the challenges they face to draw inspiration.
- Artists often face challenges like writer’s block and impostor syndrome, which are also frequently experienced by engineers.
- Techniques such as improving awareness of your surroundings, becoming a better listener, and constantly building new relationships are all methods that can help you become a more effective and compassionate staff+ engineer.
Achieving a staff+ engineering role is a considerable milestone that many engineers seek as the next step in their career growth. All staff+ positions are different, and precisely what your role entails can sometimes be murky.
Depending on the size of your organization, you could be the only staff+ engineer, or there may be dozens or hundreds of other engineers in similar roles to you. The number of staff+ engineers, the type of software you’re developing, and the management structure all play a part in shaping your role.
In this article, we’ll discuss the challenges that staff+ engineers can face and how our struggles are similar to those of artists. Specifically, we’ll look at the parallels between creating art, creating software, and dealing with organizational dynamics.
We’ll focus on such topics as:
- Leading without authority
- Dealing with psychological discomfort and impostor syndrome
- Finding and establishing relationships with your peers
- Bringing ideas from inception to completion
- Embracing discomfort, even failure, to grow
Before we dive into the challenges that staff+ engineers face, let’s review what shape these roles usually take within an organization and what the career ladder may look like for individual contributors versus managers.
Staff+ Archetypes
Will Larson’s Staff Engineer is a guide for many of us getting into the role of staff engineer or looking for guidance on how other individual contributors manage their careers. Larson identifies four common archetypes for staff engineers that he has encountered.
- The Tech Lead guides the approach and execution of a particular team. They partner closely with a single manager but sometimes with two or three managers within a focused area.
- The Architect is responsible for a critical area’s direction, quality, and approach. They combine in-depth knowledge of technical constraints, user needs, and organization-level leadership.
- The Solver digs deep into arbitrarily complex problems and finds an appropriate path forward. Some focus on a given area for long periods.
- The Right Hand extends an executive’s attention, borrowing their scope and authority to operate particularly complex organizations.
I have primarily operated in the Tech Lead role for my career, but I have also dabbled in Architect.
Outside of these archetypes, I’ve found that our roles can be vague and shaped by personal experience and organizational dynamics. If you’ve grown at your organization over time from a senior position to staff+, you probably are filling a different need/role than someone hired from outside the company. Especially for older companies with tech debt or legacy systems, historical context plays a huge part in how well you can operate and influence the company. Starting a job where you’re hired as a staff+ engineer can be particularly challenging. I’ll discuss a few ways of overcoming these challenges later in the article.
Parallels to Art
I recently read a book called The Creative Act: A Way of Being by Rick Rubin. Some of you may have heard of Rubin: he is an American record producer famous for working with an eclectic mix of artists across musical genres such as hip hop, heavy metal, and alternative rock, and he has produced albums for the Beastie Boys, Run DMC, and Metallica, among others.
I picked up this book because I was feeling a pull towards creative pursuits in my life, something I have honestly always struggled with as an engineer.
“To live as an artist is a way of being in the world. A way of perceiving. A practice of paying attention. Refining our sensitivity to tune in to the more subtle notes. Looking for what draws us in and what pushes us away. Noticing what feeling tones arise and where they lead.” – Rick Rubin
Get rid of the idea that you are not creative or a creator. Everything we make in our lives involves creativity, from writing software to writing documents. We’re not just technicians. Art and engineering are both about experimentation. Embracing ambiguity, performing discovery, and crafting an end result, whether a solution to a problem or a masterpiece, is a process we all follow when solving problems or creating art.
Between the book and a few interviews I’ve listened to with Rubin, I’d like to draw on some of the lessons he shared and takeaways from a more artistic life that I think we can use to inspire our jobs as staff+ engineers.
Being open to new ideas
Rubin talks about the idea that creativity and clear thinking come to us more effortlessly when we have a sensitive “antenna”. I want to touch on this, along with his discussion of always being curious and looking for clues and transmissions.
Have you ever sat in a familiar room with nothing to do (no TV, phone, tablet, music, etc.) and noticed something you never saw before? Something on the ceiling or wall? Being bored is something we’ve lost; with distractions always stealing our focus and attention, we’re not as attuned to the subtleties of daily life. This state of awareness is something artists curate in order to draw inspiration from their daily lives and the world around them. How much more attuned to our teams, and how much better listeners, could we be if we increased our awareness?
Try meditation if you struggle with maintaining focus or feel pulled towards multiple tasks simultaneously. If meditation is not for you, try leaving your phone in another room during the workday, or practice being bored. I know that sounds counterintuitive, but give it a chance. Go outside and sit on a park bench for 20 minutes with no distractions, and after a few minutes you’ll realize how much more attuned to your surroundings you’ve become.
Cultivate a beginner’s mindset
The next topic I’d like to touch on is cultivating a beginner’s mindset. A beginner’s mindset can be summarized as approaching problems as if you have no knowledge or experience in the given arena.
Spending years gaining experience and refining skills can constrain our imagination, creativity, and focus. The idea of a “Beginner’s Mind” suggests that embracing this mindset can lead to acquiring new abilities, making wiser choices, and fostering empathy. The essence of a Beginner’s Mind lies in freeing ourselves from preconceived notions about the future, which reduces the risk of stress or disappointment.
Adopting a beginner’s mindset proves beneficial for artists, allowing them to overcome creative blocks, initiate fresh ideas, and break free from self-imposed limitations. However, this mindset is not limited to artists; engineers and other less obviously creative professionals can benefit from it too. Through years of dedicated practice and execution, our minds unconsciously develop recurring patterns, turning them into mental shortcuts, rules, and best practices.
I’ve had success getting into a beginner’s mindset over the years by avoiding pre-judgment when learning new technologies and by working alongside someone who is a beginner in the domain. Participating in exercises where you’re outside your comfort zone, asking questions, and listening shows your interest in learning, and it also keeps you from jumping to conclusions about how you expect something to work based on experience.
Writer’s block
Do you often struggle to make progress on a challenging problem? Artists commonly struggle to break through creative or mental blocks, what we usually call writer’s block, and engineering challenges can take on a similar form. I want to discuss a few ways to tackle writer’s block head on.
Take small steps
I often feel overwhelmed by how large a task seems at the start. The way I make progress and deal with the anxiety is to break up the work into manageable chunks, even scheduling time for myself in my calendar to work on it. For this post, for instance, I knew how long I wanted it to be and had a rough outline, so I said to myself, “write five minutes of content a day for five days next week”. Just write down whatever comes into your head for each topic, and the following week, edit.
As writers, we often get caught in the trap of trying to write and edit simultaneously, making us feel stuck in the loop of writing and re-writing. Taking these small steps towards a larger goal can give you a sense of accomplishment each day and make the more significant task seem less overwhelming.
Eliminate distractions / change your environment
I’m a big fan of the concept of Deep Work, popularized by Cal Newport. One story in his book stuck with me, about a writer who flew round-trip to Tokyo, writing during the whole flight to Japan. After landing, he drank an espresso, turned around, and flew back, again writing the entire way, arriving back home after 30 hours with a completed manuscript in tow.
This story resonated with me because I felt the same every time I flew. Being forced to sit in one place with little distraction helps me clear my head and focus on a single task. Making a radical change to your environment and investing significant effort or money into supporting a deep work task can increase the task’s perceived importance.
Another thought exercise on this topic is how often we consider solutions to a problem only superficially. For instance, we have a great idea in the shower but don’t allow ourselves more than a few minutes to dig in. We tackle issues “on the surface” but never get at the meat of the problem. Having the space and time to dig deeper is where your best ideas emerge.
Play with alternate mediums
Another “trick” I like to use for myself and my teams is changing how we communicate or think about a problem. One of the easiest and quickest ways to do that is to draw instead of talk about an issue.
Visual communication has gotten more complicated with virtual meetings, but it’s still beneficial. We tend to think things over in our heads and not write out a problem or draw it out. Both of these are excellent ways of forcing yourself to put down ideas to paper and work out the details.
If you’re familiar with rubber duck debugging in programming, you can apply the same idea to drawing. Invite someone to listen to your idea, even if you don’t want, or aren’t ready for, feedback. Make sure to set the tone that this is a safe space for ideas.
Be an anthropologist
Another area where I often see folks struggle is understanding and navigating organizational dynamics and culture. It’s not surprising in large organizations (a few thousand engineers or more), but I have also seen it in smaller ones (more than 100 but fewer than 1,000). As companies build out management and corporate structures, engineers often lose track of what happens above their immediate line manager.
To help folks understand better, I take on the role of an anthropologist. An anthropologist’s job is to study the lives of humans in past and present societies worldwide. Treat your company as that world and get exploring! Get to know people in other parts of the business, especially folks who’ve been at the company a while and can share context and background. Most often this happens serendipitously over time, but I find that getting to it sooner, and intentionally, helps. When I started at a new company recently, I set out to conduct 1:1s with at least 20% of the folks working in my organization in my first 30-60 days. The 1:1s helped me come up to speed quickly. They were also an opportunity to learn, through the interview questions I mention below, how engineers and managers feel about their roles and what struggles and challenges I would need to help address.
Overcoming Staff+ Challenges
In this section, I’ll describe how approaching our role as an artist can help with some of the challenges of staff+ positions. We’ll touch on the parallels we drew to being an artist and how those can help navigate some of the challenges in your role.
Leading without authority
Many of you may have read a great book on this topic by Keith Ferrazzi, called Leading Without Authority. The area I’d like to touch on here specifically is co-elevation.
Co-elevation is key
Co-elevation is a mission-driven approach to collaborative problem-solving through fluid partnerships and self-organizing teams. When we co-elevate with one or more of our associates, we turn them into teammates.
How do you establish relationships to build self-organizing teams and co-elevate? We talked earlier about being an anthropologist to learn your organization’s and various teams’ inner workings. Use this exercise to build relationships with different folks across the organization. Build trust and credibility with them.
Develop a set of questions to ask during the time you have with them. These questions will help set the tone for the interaction, but keep it light so it doesn’t feel like an interview. Your goal here is to listen, not talk! Be an active listener.
Interview questions
Set the tone by first getting context from the interviewee. Ask them to talk about what they do at the company (team, job function) and, if they’re comfortable, a little about themselves on a personal level. Next, gather feedback on how they feel about their role at the company: what is working well for them, and what is most frustrating? Lastly, ask for advice: what should I be aware of that I may be missing?
Use this set of questions to establish your relationships. For the connections you find valuable, stay in touch with those folks every few months to keep them active. These interviews can also help you practice a beginner’s mind, seeing problems from a perspective where you are not the expert.
Peer Relationships
Another challenge, building on leading without authority, is establishing solid relationships with peers. If you happen to be in an organization that doesn’t have many other folks in your role, you may struggle with this.
A few suggestions on how or where to build peer relationships:
- Look outside your immediate team. If your company doesn’t have one, try to form a peer group of all the staff+ engineers in your organization.
- Look outside your company. There are a few communities where staff+ engineers gather to discuss common topics and share feedback. The Rands Leadership Slack is an excellent place to start if you’re looking for folks in the industry outside your company.
Be sure to maintain and nurture relationships with folks you used to work with who may have grown into a similar role. I meet regularly with a peer group of former colleagues, now spread across a few companies, to keep this going. It’s a great resource for bouncing around ideas and keeping myself up to date with the challenges faced across companies and industries.
A few areas I think this can help in our staff+ roles:
- Someone who complements your skill set
- Someone you can “rubber duck” ideas with
- Someone who can commiserate with you on shared struggles
Impostor Syndrome
The topic of impostor syndrome comes up fairly often in our industry. I have often faced it, especially when starting at a new company. Artists often feel impostor syndrome as well (look at all these famous artists I’m comparing myself to; how can I possibly create something new that is as good as that?).
Despite outward success and a body of solid work, you can still feel like a fraud. These feelings are natural and many people have them, so take some comfort in that. Allow yourself to fail, to feel uncomfortable, and to be exposed to the challenge. You can overcome self-doubt and this feeling by being open and authentic and sharing your experience with others.
We often only see others’ successes, but know that in the background they have had setbacks too (rejected conference proposals, causing a production outage at work, etc.). Find colleagues you can celebrate success with and share each other’s failures. Remember to be open and authentic, no one is perfect.
Above all, find what works for you. Don’t assume that what someone else does will make you happy or help you overcome these feelings.
Embrace Discomfort
Artists, like us, suffer rejection quite often, but you need to know that you are good, that your creation is worthwhile, and that you will grow from the experience.
For me, embracing discomfort is all about facing challenges you dislike and not taking the easy path. Tackle the complex problems; don’t “snack”. Avoid low-effort, low-impact work where possible. These tasks may feel rewarding to check off your list, but never tackling something of substance will hurt in the long run. Often, high-effort, high-impact tasks are the most challenging, and they may end in failure or even humiliation. Public humiliation is not ideal, and hopefully you work in an environment that embraces failure. Still, we need to be comfortable putting ourselves out there (like artists do with paintings), taking criticism, and using it to improve.
Don’t avoid trying something because you think you might fail. There are a lot of clichés about failing, but at their core, many of them are true. You learn from mistakes, big and small. Make time to try hard things, get your hands dirty, and put your creations out in the world.
Part of embracing this sort of work and discomfort is making time for it. Let’s discuss that in the next section.
Time Management Myths
I recently read a book about time management, Four Thousand Weeks by Oliver Burkeman. It isn’t really about time management so much as a philosophical look at how much time we have in our lives. Four thousand weeks (assuming you live to be 80)!
The book gets at how we should value our time, rather than how to squeeze every last minute out of the day to be more productive. In some ways, our obsession with productivity is making us more miserable. Your to-do list will never be done. We think that if we can just finish this project, presentation, etc., then we’ll be done. You’re never done.
We need to give ourselves time to think, daydream, and use the skills I discussed earlier (being an active listener and increasing the sensitivity of your antenna).
Make time for unplanned work in your day. Most of us wake up with a to-do list, jump on meetings, and immediately get caught up in our workday with email and Slack messages. Try to set aside some time when you have nothing planned. In this unplanned time, think about a problem you may have been trying to solve recently that doesn’t need an immediate solution. We talked before about changing up your environment or medium; use this time to do that. Go for a walk or draw, anything that could help you escape the usual trappings of your work day.
Conclusion
As we discussed, staff+ engineering roles vary in scope and responsibility across companies. Engineers in staff+ roles often struggle with where to focus their attention and how to juggle responsibilities. By comparing some of the struggles of a staff+ engineer to that of an artist, we’ve identified a few practices that can improve our focus, attention, and effectiveness as engineers.
Here are a few actionable things you can take away and try yourself:
- Don’t try to micromanage every aspect of your day
- Set aside 1-2 hours in the morning or afternoon (block your calendars) at least twice a week for unplanned work
- Build new relationships
- Try to meet at least one new person in your company once a month
- Join (or start) some form of peer mentoring program
- Work on improving active listening
- Meditation is not for everyone, but find something that works for you (walks, quiet time, etc.)
- Commit to spending 30 minutes a few times a week exploring this space to improve

MMS • Agazi Mekonnen
Article originally posted on InfoQ. Visit InfoQ

Mozilla Developer Network (MDN) has released AI Help, a beta tool aiming to streamline web developers’ interactions with the platform and provide problem-solving assistance. Only users with an MDN Plus account can access AI Help for now. So far, the community reaction has not been overly positive, and a number of GitHub issues have been reported.
AI Help uses OpenAI’s GPT-3.5 model together with MDN’s latest content to generate responses. It offers a chat interface that allows developers to ask questions directly on MDN and receive answers along with related MDN articles for contextual help. The tool taps into MDN’s extensive database, providing quick access to coding documentation, best practices, and reference information.
A section in the launch blog post, titled “Under the Hood”, explains that AI Help combines two technologies that work well together: embeddings for similarity search, and generative AI. Embeddings are numerical representations of words and sentences that capture meaning and relationships, so that similar words have closely situated vectors. When a user asks a question, a new embedding is generated for the query, and a similarity search is conducted to identify the relevant content within MDN. Once the question and the matching content are identified, generative AI is employed to extract the answer from that content.
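To make that embed, search, and generate flow concrete, here is a minimal sketch in Python. It is not MDN’s actual implementation: the embed and generate helpers below are toy stand-ins for calls to an embedding model and to a generative model such as GPT-3.5, and the small in-memory document list stands in for MDN’s content store.

```python
import numpy as np

# Toy stand-ins for the real services (assumptions for illustration, not MDN's code).
def embed(text: str) -> np.ndarray:
    """Placeholder embedding; a real system would call an embedding model API."""
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    return rng.standard_normal(16)

def generate(prompt: str) -> str:
    """Placeholder for a generative model call (e.g. GPT-3.5)."""
    return f"[model answer based on a prompt of {len(prompt)} characters]"

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def ai_help(question: str, docs: list[dict]) -> str:
    """Embed the question, rank documents by similarity, then generate an answer."""
    q_vec = embed(question)
    ranked = sorted(docs, key=lambda d: cosine_similarity(q_vec, d["embedding"]), reverse=True)
    context = "\n\n".join(d["text"] for d in ranked[:3])  # most similar articles
    prompt = f"Answer using only this documentation:\n{context}\n\nQuestion: {question}"
    return generate(prompt)

docs = [{"title": t, "text": t, "embedding": embed(t)}
        for t in ("Array.prototype.map()", "CSS flexbox layout", "Using the Fetch API")]
print(ai_help("How do I make an HTTP request in JavaScript?", docs))
```

The key point of the design is that the generative step only ever sees the retrieved documentation, which is also what lets the tool surface related MDN articles alongside each answer.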
The AI Help chat interface includes a simplified feedback reporting system to encourage a collaborative and user-driven approach. Users can find a “Report an Issue with this Answer” link within the chat interface that directs them to create an issue in MDN’s GitHub issue tracker. This process allows users to share concerns or inaccuracies, contributing to continuous improvement and refinement of the AI Help feature.
MDN had previously released AI Explain, another AI-powered tool designed to help readers understand and explain code blocks. However, following feedback from the community pointing out instances of incorrect answers, MDN has decided to pause AI Explain temporarily.
The introduction of AI Help and AI Explain has attracted considerable attention within the developer community. A user named eveee, who opened a GitHub issue on the Yari repository (the platform code behind MDN Web Docs), highlighted a key concern, stating:
“MDN’s new “ai explain” button on code blocks generates human-like text that may be correct by happenstance or may contain convincing falsehoods. This is a strange decision for a technical reference.”
Another user called catleeball emphasized that instead of using AI Help, relying on domain experts to write the documentation is crucial:
“I strongly feel that the AI help feature is likely to cause much more damage than it would possibly help. I think the best path forward is to offer only documentation written by humans, ideally reviewed by people who have domain expertise.”
In response, MDN acknowledged the feedback and conceded that LLM technology is still immature. The platform has paused AI Explain, while encouraging AI Help users to provide feedback using the GitHub link included in each response.
Other products within this space include GitHub Copilot, Tabnine, and Replit’s Ghostwriter.

MMS • Laura Maguire
Article originally posted on InfoQ. Visit InfoQ

Transcript
Shane Hastie: Good day folks, this is Shane Hastie for the InfoQ Engineering Culture Podcast. Today I’m sitting down with Dr. Laura Maguire. Laura is a specialist in cognitive systems engineering. Laura, welcome. Thanks for taking the time to talk to us today.
Dr. Laura Maguire: Thank you, Shane, for having me.
Shane Hastie: My normal starting point is who’s Laura?
Introductions [00:24]
Dr. Laura Maguire: Well, I am a cognitive systems engineer, which is a fancy way of saying I look at how people think at work and I look at the challenging and often hidden aspects of how we operate in complex, fast-moving, uncertain, time-pressured work environments. And as a cognitive systems engineer, I am very interested in understanding all aspects of both the physical environment that we operate in, the tools and the automation that we work with and how that can either help or hinder our ability to perceive the world around us, and to reason about what those kinds of changes that we’re seeing, what that might actually mean for our goals and our priorities. And typically, cognitive systems engineers use all of this research and use all of this data to design better tooling and better work systems. And that can include things like procedures and practices as well as the design of teams, team structures, and task sequencing as well.
Shane Hastie: That almost sounds, dare I say, slightly mechanical. I’m biased and making assumptions, I was expecting to hear psychological safety and all of that, but no, you’re talking about processes, procedures, physical design. How do those all swing in together?
Explaining cognitive systems engineering [01:51]
Dr. Laura Maguire: So it’s both the physical and the cognitive world. So a lot of software design, interface design, interaction design is a part of what we do, but we think about the human situated at work. And so, you and I both live in a physical environment. We interact with our software and with our computers in a physical environment. So that’s going to be a big part of how do we design work systems that allow people to interact and to function at the highest level. Where a lot of cognitive systems engineering came out of was looking at high-risk, high-consequence type work environments, so these are places where the consequences of human error or of confusion or mistakes getting made, they’re quite high. There’s the loss of very expensive assets like the International Space Station. There’s the potential for environmental degradation, like perhaps on an offshore oil rig, or there’s the potential for catastrophic loss of life. A pilot in a cockpit carrying 400 or 500 passengers.
And so this history of how we look at work and how we handle and treat the cognitive aspects, the thinking aspects, the sense-making parts of work have carried through to the current applications of cognitive systems engineering, which is looking at all kinds of high-value work. And so I include software engineering in high-value work, in the sense that software engineers are responsible for building, maintaining and operating some of society’s critical digital infrastructure. And even if it’s not operating at that level, it is still services and software systems that we rely on for our comfort and for smooth collaboration and coordination, and those things are really important as well.
Shane Hastie: I was almost seeing, I suppose, two layers there. There’s the cognitive load on our engineers building the products, but also the products that they build going out into the world.
Dr. Laura Maguire: Yes, absolutely. So an example of what you’re describing there is something like an infusion pump in a hospital setting for a healthcare provider. That is a physical piece of equipment that is carrying out very important life-critical functions, but with the software that’s involved, there is a kind of human and machine interaction happening there. And the way that that software is designed can either confuse the operator or it can help the operator very quickly make sense of: is this normal operating conditions? Is the task that I’m carrying out being done appropriately? Or is there something wrong and do I need to correct there? And so this thinking about the world in a very integrated way helps us to minimize the amount of error that takes place when we are working with technology mediated work.
Shane Hastie: You mentioned you’ve been doing some interesting research. Do tell.
Research into incident response and incident management in software service delivery [04:58]
Dr. Laura Maguire: Yes, so much of my work over the last six years has been looking at incident response and incident management in software service delivery. And so a lot of what I’ve been looking at there is how do software engineers typically notice that there is a problem within the system? And we have observability, we have dashboards, we have a lot of tooling that helps us to see anomalous or unexpected events within the system. And then we use a variety of sources to make sense of that and to think about what are our goals and priorities here? Do I know exactly what’s going on, and so I need to act very quickly to be able to repair a service like a performance degradation or a service outage? Or is this something that I’m not sure about, and so I’m going to gather some more information, maybe let the service be degraded for a little bit longer so that I can understand what’s happening?
And the reason why this is such an interesting area to study for cognitive systems engineers is that most large-scale software systems these days are continuously changing. They are very complex and adaptive in and of themselves, and so the amount of cognitive work that it takes and the types of cognitive work that it takes to maintain these systems gives us a lot of insight into how do people in other domains, other occupations reason about operating in these same kinds of complex systems. Because at the end of the day, software engineers are people. They have similar kinds of cognitive processes that you might see in some of those other kind of high risk, high consequence domains that I was describing.
And one of the things that I noticed over the last year in particular is the ways in which the stress and uncertainty and the volatility of what’s been happening within the tech industry, how that’s starting to influence and impact cognitive work, particularly how it affects attention, how it affects our ability to communicate and collaborate when there is fear about losing a job, when there is uncertainty about when the next round of layoffs might be coming in, and when there’s been a lot of emotions after a layoff or after a big reorg or restructuring. This can make it really hard for people to concentrate at work and to feel safe at work as well. And so this new approach that I’ve been thinking about is how do we take some of these same kinds of models and these same kinds of methods that we use to understand cognitive demands and how work can get very challenging and what overload means from a cognitive perspective and can we apply that to these very emotionally demanding times? And it turns out that you can.
Shane Hastie: What does that look like?
Voices from the research [08:00]
Dr. Laura Maguire: So there’s three main concepts from the field of cognitive systems engineering that really apply here. And the first one is around identifying boundary conditions. And when I was doing some of this research and I was talking to software engineers, they started telling me the ways in which things were impacting them. And if you don’t mind, I would love to share some of that with you-
Shane Hastie: Please.
Dr. Laura Maguire: … because I think they explain it a lot better than I could.
How am I feeling? It feels like now it’s just a matter of time. And even though I was relieved I wasn’t laid off, I got scared. I know there’s more layoffs coming and now I wake up every day wondering if today’s my day to get cut.
So a lot of fear there. And another engineer that I spoke with talked about the pressure that she felt to stay at work and be at work, even though she recognized she needed a break.
Honestly, I just want to be able to take a few consecutive days off at a time this summer without feeling like I am totally screwing my team over or having to work twice as much before or after and not have everything fall totally behind on the project. I know that it doesn’t all rest on me and I trust my team, but I also just feel this huge responsibility that I’m letting them down if I’m not always there, and I can’t keep that up forever. I know that I need to take the time off and if I don’t, it’s actually going to hurt us in the long run, but even a single day feels nearly impossible right now.
Another theme that started to come up was that the strategies people typically use to manage very stressful circumstances weren’t really having the same effect.
I do feel okay most of the time, but I’ve been spending time with my friends, the friends that have also been laid off or also are struggling with anxiety. I know this is part of a larger cycle and it’s supposed to end sometime soon, but unfortunately it’s everywhere. Everywhere I look, like LinkedIn, Twitter, most of the media, everyone’s reporting it, and it feels like it’s affecting everyone everywhere almost.
What I started to notice from these interviews and from talking with different folks, that there was a very real impact. The emotional demands of the current environment was having a real cognitive impact on their ability to concentrate, their ability to focus attention, their ability to rest and recharge, to deal with the pressures of trying to do more with less in the current climate. And I think this is a very common feeling right now within the industry. And so I was looking at some of the strategies and some of the models that we use within cognitive systems engineering and thought this actually can be adapted to understand the emotional demands as well.
Pushing against the boundaries of stress and pressure [11:34]
The first strategy that I think is relevant here comes from a Danish researcher named Jens Rasmussen, who proposed a model for how we think about risk and how risk is managed over time. He proposed this idea that we all operate within a trade space. And the boundaries of that trade space are things like cost and schedule; time and money are hard constraints that we have in the world. And then he proposed that the third boundary there was what he calls the line of technical failure, and what I see as our ability to cope. It’s our line of emotional failure. And what Rasmussen said is that we constantly have these pressures within modern work environments to save time, save money, and so that exerts these pressures in this trade space and has you move closer to your own boundary, your own emotional coping boundary. And obviously crossing those boundaries is not a good thing, and so we often put in place countermeasures to keep us away from those boundary conditions.
And so the countermeasures for things like going over budget or over time is project check-ins, it’s work estimating. And those help us to understand where we are operating in that trade space. But when it comes to our emotional capabilities, like we heard in those examples, the countermeasures that we have, taking PTO, spending time with friends and family, being able to rest and recharge, those may not have the same effects. And so we’re constantly moving closer towards the boundary. And so the trouble with boundary conditions is that when we’re operating in normal conditions, these are typically well modeled. We understand the performance of the system within normal operating conditions, but the closer we get to the boundary, the less predictable it is, so we don’t know what impact the way a failure is going to happen, and we don’t know what impact that’s going to have on the rest of the system.
An example of this is when you start to hear people say, “Oh, I’m operating pretty close to the edge here,” or, “I’m getting pushed to my limit,” these are examples and these are indications that people are operating in their emotional boundary condition. Is that something that you’ve experienced in your career over the time, Shane?
Shane Hastie: Oh, yes. I was just thinking, this sounds to me a bit like what we would call burnout.
Dr. Laura Maguire: Yes, absolutely. So one of the things that I think is really great about this model for helping us to try and understand when we’re in these emotional boundary conditions is it gives us the sense of understanding when we’re in a drift to the boundary. And so we all have this abstract idea of, “Oh, I’m under pressure,” but we can start to look for and recognize signals that we might be getting closer to our boundary. So an example of some of those for me is when I start pouring myself a cup of coffee in the morning and setting it down and then completely losing it in the two minutes between when I’ve poured it and when I’ve actually sat down at my desk. It is I forget appointments more readily, I am a little bit more irritable, I’m late for things or projects start to get… I feel a lot of pressure to get them produced.
And so when I start to notice these signals, I can recognize I’m close to the boundary and it’s less predictable. If I start to have one or two more unexpected events that show up, which when we’re doing faster, better, cheaper, there’s typically surprises, then I may not be able to control or to manage for those events the same way I would under other kinds of normal operating conditions. So when I’m in these boundary conditions, I can start to say, “Hey, it’s actually really important for my overall performance or the stability of the system, the continuity of the work that my team is producing, that I’m able to push myself back away from that boundary.” Can you think of examples from your past work where you’ve been in these boundary conditions?
Shane Hastie: Oh, yes. I think of one in particular where I know I went way over the boundary condition. I ended up absolutely burned out and had to change jobs.
Crossing the boundary into burnout [16:11]
Dr. Laura Maguire: Yes. So that is a really excellent point, because what happens when you cross the boundary is it’s not clear whether you’re going to have a collapse like you just described, which is, “I cannot function in this environment anymore,” or whether you’re going to have graceful degradation, or whether you’re going to have potentially what cognitive systems engineer David Woods calls graceful extensibility. So before we talk a little bit about that, I just want to acknowledge what you were saying there, is that getting pushed into the boundary conditions can often feel like we have no other choices, we can’t take time off. As we heard some of those engineers say, we start to feel like, oh, if I’m not pushing myself, the rest of my team is going to suffer around this.
But this is really where I think having this language about being in the boundary conditions and starting to try and recognize these signals, not only for ourselves but for the people that we’re working with, can help us to try and change some of that conversation because it implies that crossing that boundary is potentially imminent and that that is a signal that we may have collapsed. And if you have people collapsing and removing themselves, as you had to, from the work environment, then overall the whole performance of the system declines.
Shane Hastie: What can one do as the individual, but also what can leaders do?
Resilient teams manage load and support each other [17:44]
Dr. Laura Maguire: Before I get to what leaders can do, I want to talk a little bit about what the team can do together because as a system engineer, I say, “Yeah, we shouldn’t focus on the individual. We shouldn’t put this on an already stressed-out and overworked worker to say, well, you just have to be more vigilant about how you’re doing and you have to push back.” We usually say, “No, no, let’s go to the system,” and that is typically leadership who has more power, more authority, more ability to influence at a systemic level so that individuals aren’t having to push themselves so hard. However, given the current climate, we know that there are also some real hard constraints around how work has to get done in this moment. There’s a lot of pressure for many startups, many young companies. For all companies, there’s an economic pressure that’s very real.
But I want to talk a little bit about this idea of sharing adaptive capacity, which is a really strong principle of resilience engineering, cognitive systems engineering. And effectively, sharing adaptive capacity is thinking about how do we think about the actual capabilities of the system as a whole, not as a collection of individuals per se? Because when we think about ourselves as a collection of individuals, if I’m not doing okay one day and you’re actually feeling okay that day, then maybe we can lighten my load a little bit or you’re able to support me to bring me back up a little bit more without it taking away from your capacity, your resources in that moment. And so if we start to think about this across a whole team, that people are going to have good days and bad days, and if we can share that, then we can level out this performance variability across the team a little bit more.
And so this idea of sharing adaptive capacity, adaptive capacity, Woods would describe this as being a systems readiness or its potential to change how the system currently fits so that it can continue to fit changing surprises, anomalies and situations. And so effectively what this means is if I’m not having a great day and you’re having an okay day, can we adjust in that smaller timeframe? It’s not you taking over my job, but it’s helping each other on a more micro level so that we can smooth out that performance curve. And one of the things that is quite useful about this idea of sharing adaptive capacity is thinking about how it is that we typically work together and whether those are functional versus dysfunctional patterns.
And so typical patterns, when people reach a saturation point, this is what we see from the literature, is they cope with it in four different ways, and I’ll add a fifth one. But the four ways are they’ll either drop the tasks because they’re too overloaded, and so dropping the tasks means someone else on the team is going to have to scramble to pick it up because they have had no indication that this is going to happen and the work still needs to get done so someone’s got to pick it up. They’re going to defer it in time. So they’ll say, “Can we do this at a later point in time?” Which is usually quite an effective coping strategy if I as an individual and communicating and coordinating with the people around me, particularly when there’s work that’s very tightly sequenced or highly interdependent.
They will degrade the quality of the work, and so they’ll just do not as great a job on it, or they’ll delegate it. They’ll recruit some other resources to try and bring it in. And so those can be both functional and dysfunctional patterns depending on how much communication and coordination is happening across the system. And so the more that we can talk about, “Hey, I’m in my boundary conditions right now,” or, “I’m on a pretty fast drift towards my boundary, I’m going to need to deploy one of these strategies,” it gives us language and gives us the ability to ask for capacity from our team members or from our leaders as well.
I mentioned the fifth coping strategy, and that one is one that I find myself employing an awful lot and it doesn’t always work very well for me, and that’s digging in. And that’s typically gritting your teeth and dropping your shoulder and just really pushing into the work. And as you described earlier with your example from your work, that only lasts for so long, and so we have to be thinking about these other kinds of strategies that we can use that are overall better for the system performance.
Shane Hastie: So this means that on the team we’ve got to really focus on that communication, having the language, but then genuinely listening to each other and supporting each other, and possibly proactively asking each other, “How are you doing?”
Communicating our condition to fellow team members [22:49]
Dr. Laura Maguire: Yes, absolutely. So you’re bringing up a really good point there, which is the earlier the indication that there is a problem, that we’re in a drift, the more range of strategies that we have or the more options we have for how to cope with that. But it’s also really hard to say, “I’m not doing okay.” And we also think about ourselves as being like, “Okay, well I can get through this. I just need to buckle down and keep pushing.” But that often doesn’t work as well as being able to say, “I’m having a bit of a day today. These are the things I’m going to be working on, but I know I’m distracted or I’m preoccupied, or I’m feeling the effects of accumulated fatigue from not having time off, and so I just kind of want to let you know that I’m in that zone.”
Some teams will use their Slack icons to be able to say, “I’m in yellow, or I’m in red today,” as a little bit of a flag to tell others that they’re not doing that well. But to your point, communicating and being a little bit more explicit to the extent that we’re comfortable doing and saying, “I think this might be one or two days where I’m going to have degraded,” and I’ll say, “performance,” because we are people, not just performance machines. But being able to talk a little bit about what some of the pressures and the demands that we’re experiencing are helps us to reorganize that work across the team a little bit more readily.
Leadership can support dynamic reconfiguration [24:21]
So the third, you asked a little bit about what can leadership do, and from a researcher perspective, this is a really exciting idea for me. And it’s about this idea of dynamic reconfiguration, and that’s of the human resources, but also of thinking about how we organize and structure work more readily. And the reason that I find this very interesting is that increasingly the boundaries of the actual system, what’s your software? What’s my software? If it’s third party or if it’s an internal system, those need to be quite fluid and they need to be quite adjustable because of the interdependencies between different aspects of the system. And that also goes for different functional aspects of the team. So customer support and the SRE team may need to be able to dynamically reconfigure to say, “Hey, we’re getting a lot of information from our customers about a problem that they’re seeing. It’s quite slow for us to use these ticketing systems from customer support. Maybe we can start to pull them into our Zoom calls when we’re having an incident.”
Or someone told me a story recently about their company deciding to run a Super Bowl commercial, and they dynamically reconfigured within the organization and said, “Hey, this is a very integrated all-company approach, because what happens from the marketing side of things and our customer support side of things is going to be very integral to the system performance as well,” and so they started to work more closely and more collaboratively across the team. So I think that in terms of what leadership can do, I think getting a little bit more flexible in terms of how to exchange resources from different teams, how to think about the ways in which things like deferred maintenance or choices about what features to prioritize, what the downstream and upstream implications of those are going to have on the wellbeing of the team and the amount of pressure and stress that they feel, those factoring in the emotional aspects of the software engineering can help to alleviate some of that pressure. Even though we are going to have to be working hard, we are still in quite a constrained environment.
And then the last thing that I will say about this dynamic reconfiguration is my hope for industry is to start thinking about intra-organizational collaboration as well. So I talked a little bit about the ways in which our dependencies create this need for a different kind of organizing that may not typically be the standard. And in my dissertation research, I actually saw a number of examples where engineers from both sides, and I was studying incidents predominantly, and so engineers on the vendor side and engineers on the client side were trying to independently debug what was happening. And some of the things that they were asking for, “Can you run these diagnostic tests for us?” were actually exacerbating the problem itself.
And so instead, they began to say, “Okay, we need to start thinking of ourselves as one unit, not as either sides,” and so they would break down those walls and start to interact more directly. And these are the kinds of changes that have to take place in order to respond in real time to very real issues. And so my hope is that as an industry, we start to think about how some of these barriers around how we work together may not actually be in the best interest of the cognitive and emotional parts of that work for the people involved.
Shane Hastie: We’ve been hearing more and more recently about cognitive load. This is an early indicator of, I hope, ongoing conversations about emotional load and how we deal with that. There’s a lot here, a lot to unpack. If our readers, listeners want to continue the conversation and hear more, where do they find you?
Dr. Laura Maguire: I am on Twitter @LauraMDMaguire. I’m on ResearchGate. You can take a look at some of the work that I’ve done there. And I am also on LinkedIn, and if there are things in the conversation that have really resonated for people, I would love to continue the conversation.
Shane Hastie: Laura, thanks so much. Really appreciate your taking the time to talk to us today.
Dr. Laura Maguire: Thank you, Shane. It’s been a pleasure.

MMS • RSS
Posted on mongodb google news. Visit mongodb google news
Janney Montgomery Scott LLC increased its position in MongoDB, Inc. (NASDAQ:MDB – Free Report) by 79.8% in the first quarter, according to its most recent disclosure with the Securities and Exchange Commission. The firm owned 2,718 shares of the company’s stock after acquiring an additional 1,206 shares during the quarter. Janney Montgomery Scott LLC’s holdings in MongoDB were worth $634,000 at the end of the most recent quarter.
A number of other hedge funds and other institutional investors have also bought and sold shares of the company. Raymond James & Associates increased its holdings in shares of MongoDB by 32.0% in the 1st quarter. Raymond James & Associates now owns 4,922 shares of the company’s stock worth $2,183,000 after buying an additional 1,192 shares during the last quarter. PNC Financial Services Group Inc. increased its stake in MongoDB by 19.1% in the first quarter. PNC Financial Services Group Inc. now owns 1,282 shares of the company’s stock valued at $569,000 after acquiring an additional 206 shares during the last quarter. MetLife Investment Management LLC purchased a new stake in MongoDB during the first quarter valued at about $1,823,000. Panagora Asset Management Inc. lifted its stake in MongoDB by 9.8% during the first quarter. Panagora Asset Management Inc. now owns 1,977 shares of the company’s stock worth $877,000 after purchasing an additional 176 shares during the last quarter. Finally, Vontobel Holding Ltd. increased its position in shares of MongoDB by 100.3% during the 1st quarter. Vontobel Holding Ltd. now owns 2,873 shares of the company’s stock valued at $1,236,000 after purchasing an additional 1,439 shares during the last quarter. Institutional investors own 89.22% of the company’s stock.
Insider Activity
In other news, CRO Cedric Pech sold 15,534 shares of the stock in a transaction that occurred on Tuesday, May 9th. The shares were sold at an average price of $250.00, for a total transaction of $3,883,500.00. Following the sale, the executive now owns 37,516 shares in the company, valued at $9,379,000. Also, CAO Thomas Bull sold 516 shares of the business’s stock in a transaction on Monday, July 3rd. The shares were sold at an average price of $406.78, for a total transaction of $209,898.48. Following the completion of the sale, the chief accounting officer now directly owns 17,190 shares in the company, valued at approximately $6,992,548.20. Both sales were disclosed in filings with the SEC. Insiders sold 114,427 shares of company stock worth $40,824,961 over the last three months. Corporate insiders currently own 4.80% of the stock.
MongoDB Stock Performance
Shares of MDB stock opened at $402.80 on Friday. The company has a current ratio of 4.19, a quick ratio of 4.19 and a debt-to-equity ratio of 1.44. The company’s fifty day moving average price is $386.16 and its 200-day moving average price is $281.39. The stock has a market cap of $28.43 billion, a PE ratio of -86.25 and a beta of 1.13. MongoDB, Inc. has a 12 month low of $135.15 and a 12 month high of $439.00.
MongoDB (NASDAQ:MDB – Get Free Report) last announced its quarterly earnings data on Thursday, June 1st. The company reported $0.56 earnings per share for the quarter, topping analysts’ consensus estimates of $0.18 by $0.38. The business had revenue of $368.28 million for the quarter, compared to analysts’ expectations of $347.77 million. MongoDB had a negative net margin of 23.58% and a negative return on equity of 43.25%. The business’s revenue was up 29.0% compared to the same quarter last year. During the same quarter in the prior year, the company earned ($1.15) earnings per share. On average, equities analysts expect that MongoDB, Inc. will post -2.8 EPS for the current fiscal year.
Analysts Set New Price Targets
MDB has been the subject of several research reports. Robert W. Baird upped their target price on MongoDB from $390.00 to $430.00 in a research note on Friday, June 23rd. Stifel Nicolaus increased their target price on shares of MongoDB from $375.00 to $420.00 in a report on Friday, June 23rd. Needham & Company LLC boosted their price target on shares of MongoDB from $250.00 to $430.00 in a research note on Friday, June 2nd. Truist Financial upped their price target on shares of MongoDB from $365.00 to $420.00 in a research report on Friday, June 23rd. Finally, Barclays lifted their price objective on MongoDB from $374.00 to $421.00 in a report on Monday, June 26th. One analyst has rated the stock with a sell rating, three have given a hold rating and twenty have issued a buy rating to the stock. Based on data from MarketBeat, MongoDB presently has an average rating of “Moderate Buy” and a consensus target price of $378.09.
Read Our Latest Stock Analysis on MDB
About MongoDB
MongoDB, Inc. provides a general-purpose database platform worldwide. The company offers MongoDB Atlas, a hosted multi-cloud database-as-a-service solution; MongoDB Enterprise Advanced, a commercial database server for enterprise customers to run in the cloud, on-premises, or in a hybrid environment; and Community Server, a free-to-download version of its database, which includes the functionality that developers need to get started with MongoDB.
See Also
Want to see what other hedge funds are holding MDB? Visit HoldingsChannel.com to get the latest 13F filings and insider trades for MongoDB, Inc. (NASDAQ:MDB – Free Report).
Article originally posted on mongodb google news. Visit mongodb google news

MMS • Joran Dirk Greef
Article originally posted on InfoQ. Visit InfoQ

Transcript
Greef: Why design a new database? There was a time when you could have any database as long as it was MySQL or Postgres. These systems took 30 years to develop. They were tried and tested, and people thought twice before replacing them. Then something happened. A wave of discoveries in the research around durability, efficiency, and testing grew more and more difficult for existing database designs to retrofit. At least, this was our experience of the impact of this research on our design decisions for TigerBeetle, and why we decided to start fresh. TigerBeetle is a new, open source distributed database. Where some databases are designed for analytics, others for streaming, and still others for time series, TigerBeetle is designed from the ground up for tracking balances, to track the movement of value from one person or place to another. For example, to track financial transactions, in-app purchases, or economies, to switch payments, record trades, or arbitrage commodities like energy, and to do all of this with mission-critical safety and performance. You can use balance tracking to model any business event. That's because the way you track balances is really double-entry accounting, which is the schema TigerBeetle provides as a first-class primitive out of the box. You can spin up the replicas of a TigerBeetle cluster with a single binary, and then use a TigerBeetle client to connect to the cluster to create accounts and execute double-entry transactions between accounts with strict serializability.
TigerBeetle is designed for high availability with automated failover if the leader of the cluster fails, so that everything just works. We wanted to make it easy for others to build and operate the next generation of financial services and applications without having to cobble together a ledger database from scratch, or to execute manual database failover at 2 a.m. With a tightly scoped domain, we have gone deep on the technology to do new things with the whole design of TigerBeetle: our global consensus protocol, our local storage engine, the way we work with the network, disk, and memory, the testing techniques we use, and the guarantees that TigerBeetle gives the operator, first and foremost of which is durability. What is durability? Durability means that once a database transaction has been acknowledged as committed to the user, it will remain committed even in the event of a crash. It's fine to lose a transaction if the transaction has not yet been acknowledged to the user, but once it's been acknowledged as committed, it can't be lost.
To achieve durability, a database must first write the transaction to stable storage on disk before acknowledging the transaction, so that after a crash, the transaction is still there. However, writing data to disk is an art as well as a science. Blog posts and papers have been written about how hard it is to get right. It's tempting to use a file system and not a database, but writing to a file from an application when the plug can be pulled at any time is simply too hard, at least if you want consistency. We might think to ourselves, surely we know by now how to use rename to do atomic file updates. Then you hear a fresh report from three weeks ago that renameat is not linearizable on Windows Subsystem for Linux's ext4. This is why our applications trust the database with durability, to get this right at least in the face of sudden power loss, not to mention gradual disk corruption.
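For illustration, the rename pattern alluded to above usually looks something like the following minimal C sketch (the paths and error handling are illustrative only, and even this pattern depends on the file system honoring its ordering and durability promises):

```c
// Minimal sketch of the classic "atomic" file update: write a temp file,
// fsync it, rename it over the target, then fsync the directory.
// Illustrative only; error handling is abbreviated.
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int update_file_atomically(const char *path, const char *tmp_path,
                           const char *data, size_t len) {
    int fd = open(tmp_path, O_WRONLY | O_CREAT | O_TRUNC, 0644);
    if (fd < 0) return -1;
    if (write(fd, data, len) != (ssize_t)len) { close(fd); return -1; }
    if (fsync(fd) != 0) { close(fd); return -1; }   // force the new contents to disk
    close(fd);

    if (rename(tmp_path, path) != 0) return -1;     // atomically replace the old file

    int dir_fd = open(".", O_RDONLY);               // assumes both paths are in the cwd
    if (dir_fd < 0) return -1;
    int rc = fsync(dir_fd);                         // make the rename itself durable
    close(dir_fd);
    return rc;
}
```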
Mmap
There are at least three ways that a database can be designed to write data to disk. The first of these is mmap, where you map the contents of a file into your program's address space to give the illusion that you're working with pure memory instead of disk. However, Andy Pavlo and his students at Carnegie Mellon wrote an excellent warning on the pitfalls of mmap to motivate why mmap is not acceptable for database durability. Some databases do still use mmap. Since we're going to cover many of the same pitfalls as we go along and look at other designs, we won't go into mmap further, except to shine a spotlight on the paper.
O_DIRECT and Direct I/O
Next after mmap, there's O_DIRECT, or direct I/O. This is where the database takes responsibility for working with the disk directly, so that writes and reads go directly from user memory to disk and back again, bypassing the kernel's page cache. It's much more work, but the database has full control over durability as well as caching. However, this is what Linus has to say: the right way is to just not use O_DIRECT; there is no valid reason for ever using O_DIRECT; you need a buffer whatever I/O you do, and it might as well be the page cache, so don't use O_DIRECT. If we go with Andy Pavlo and we steer clear of mmap, and if we don't rebel against the BDFL on O_DIRECT, what else is left? Here we come to what many databases do, such as Postgres. This is to outsource durability to the kernel with buffered I/O. You write from the database's memory to the kernel's page cache, and then you allow the kernel to flush or sync the page cache to disk. Whenever the database wants to write to disk, it issues a write system call to the kernel, passing the buffer to be written and where on disk it should be written to.
The interesting thing is that this write system call does nothing, at least nothing durable, at this point. That's because when the kernel receives the write system call from the database, the kernel simply copies the buffer across to some pages in the page cache and marks the pages as dirty. In other words, they're en route to disk, but they're not there yet. Finally, because there's no real disk hardware involved, these write system calls don't typically fail. The kernel usually returns success almost immediately back to the database. Then at some point, the kernel is going to start writing the data out to disk. As each page is written, it's marked clean. If the database does nothing more than issue writes to the kernel in this way, then it can't know when these writes are safely on disk. That's not a problem so far, because if the machine were to crash right now and the data were lost, it wouldn't matter: the database would not yet have acknowledged the transaction to the user.
Fsync
When the database does want to commit the transaction, when it wants all the dirty pages relating to the transaction to be flushed to disk, again, for durability, then it issues another system call, this time fsync. Fsync tells the kernel to finish all writes to disk before returning. This ensures that all the data can be retrieved, even if the system crashes or restarts. In other words, where a database does make the big design decision to outsource durability to the kernel, fsync is crucial. This is because most database update protocols, such as write-ahead logging or copy-on-write designs, rely on forcing data to disk in the right order for correctness. However, fsync is not easy to get right, because under the buffered I/O design, fsync is where the rubber hits the road and the data actually hits the disk. Because disks are physical, they can fail in all kinds of ways, either permanently or temporarily. Some sectors might fail, others might not. If you're lucky, the disk will tell you when an I/O fails. If you're unlucky, it won't. We'll come back to this.
The takeaway for now is that since the kernel may know that some of the buffered writes hit I/O errors on their way to the physical disk, when the database does eventually call fsync, it might receive an error back from the kernel indicating that some of the writes in the batch didn't make it. If fsync does return an I/O error, there are three choices that the database can make. Option 1 is to just ignore any error from fsync and pretend that the writes didn't fail, which is what some databases used to do in the past. Option 2 is to retry the fsync in the hope that the kernel will retry all the buffered writes to disk, and to keep retrying until you're durable and no longer get an error back from fsync. Option 3 is to just crash the database, and then restart and recover from a checkpoint.
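As a minimal sketch of this commit path in C, here is roughly what the buffered write-then-fsync sequence looks like, taking option 3 and crashing if fsync reports an error (the policy and the abbreviated error handling are illustrative, not any particular database's code):

```c
// Buffered I/O commit path: write() only dirties the kernel page cache and
// usually succeeds immediately; fsync() is where the data actually reaches disk.
// This sketch takes "option 3": if fsync reports an I/O error, crash and recover.
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

void commit(int log_fd, const void *record, size_t len) {
    // Copies the record into the page cache; not yet durable.
    if (write(log_fd, record, len) != (ssize_t)len) {
        perror("write");
        abort();
    }
    // Forces the dirty pages to disk. Only after this returns 0 may the
    // database acknowledge the transaction to the user.
    if (fsync(log_fd) != 0) {
        perror("fsync");
        abort();   // option 3: crash, then recover from a checkpoint at restart
    }
}
```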
The Cracks of Buffered I/O
If you were in MySQL's or Postgres's shoes, what would you choose? How confident would you be that your choice guarantees durability? If you've heard the story before, and you know the answer, then I promise, there's a twist in the tale. There's something new which I think you will find surprising. Before we get to the answer, I want to warm up with a look at some of the cracks in buffered I/O. None of these cracks on their own are enough to end an era of database design or begin another, but I think they point to something shaky in the foundations. First, writing to cache instead of disk means that you lose congestion control over the disk, with no mechanism for backpressure. This can result in significant latency spikes when the system has to write out gigabytes of dirty pages. Second, it's difficult to prioritize foreground and background I/O. You can't schedule your fsyncs apart from other application data in the page cache. Everything is just sharing this one page cache. There are ways around this like sync_file_range, but the man page for that has this friendly warning, "This system call is extremely dangerous and should not be used in portable programs." What does that mean?
Third, buffered I/O is all or nothing. You can't handle errors related to specific writes. If something goes wrong, you don't know what it was or where. Finally, disks have now become so fast, on the order of 3 gigabytes per second, that they're starting to approach per-core memory bandwidth, on the order of 6 to 20 gigabytes per second, maybe, if you've got an M1. This means that if you're still using buffered I/O, and you're doing memcopies to the kernel page cache for every write, then you're not only thrashing your L1 through L3 CPU caches and using up CPU to do the copies, you're also potentially halving your memory bandwidth. This is assuming that the copy to the page cache is the only copy in your data plane. If you need a second copy, maybe for networking with deserialization, then that can be almost all your memory bandwidth gone. When it comes to buffered I/O, there's a crack in everything. That's how the light gets in.
Fsync Returns EIO
Let's return to our foundational fsync question. Again, the question is, if fsync fails with an I/O error, what should the database do? How do you handle fsync failure? Do you ignore the error and pretend that your buffered writes are durable? Do you keep retrying the fsync in the hope that the kernel will retry all the buffered writes that failed? Do you crash and restart? Do you recover from a checkpoint? Of course, ignoring the error is not correct. It's not an option. What about retrying? Indeed, for many years, most databases would retry. For example, Postgres would keep retrying a failed fsync until it succeeded, under the assumption that the kernel would retry any failed buffered writes. At least, this was the assumption for 20 years. Then something happened. Five years ago, Craig Ringer posted to the Postgres mailing list to report that he had run into real data loss with Postgres. The critical database guarantee of durability, the D in ACID, had been violated. What was more stunning, I think, and perhaps the reason that the incident became known as Fsyncgate, is that this wasn't due to a bug in Postgres per se. Postgres had followed Linus's advice and relied on the design of the page cache. Even though Postgres was careful to retry after an I/O error, it was still losing data. This was because in the kernel, when a buffered write failed due to a disk error, the kernel was in fact simply marking the relevant pages as clean, even though the dirty pages had not been written properly to disk. This means that Postgres might get the fsync error the first time around, assuming that another process didn't consume it first. Then the second time around, when Postgres retried the fsync, the fsync would succeed, and Postgres would proceed as if the data was committed to disk, despite the relevant pages still not having been made durable. Again, the dirty pages would simply be marked clean in the kernel, and the kernel developers maintained that this mark-clean behavior was necessary, for example, to avoid out-of-memory situations if a USB stick was pulled out, so that the page cache wouldn't fill up with dirty pages that could never be flushed. It rocked the database world.
After the discovery of Fsyncgate, Postgres, MySQL, other affected databases, they decided to fix the issue by just changing their answer to the foundational fsync question. Instead of attempting to retry fsync after failures before, they would crash, and then recover from the data file on disk. Jonathan Corbet wrote about Postgres’s fsync surprise. Then a year later, in 2019, with a fix in place, Tomas Vondra gave a fascinating talk about this, called, “How is it that Postgres used fsync incorrectly for 20 years, and what we’ll do about it.” Up to now, if I’ve told the story properly, then I think you’ll agree that this was the correct fix for Fsyncgate. This is where the story ends. Perhaps you’re still wondering, where are all the seismic shifts in database design that you promised at the beginning? Because it looks like the fix for Fsyncgate was not a major design change, not even a minor design change, but just a simple panic, to restart and recover. It’s a clever fix, for sure, but a simple fix.
Can Applications Recover from Fsync Failures?
Here we come to the part of the story that I find is less often told. The story picks up two years later, in 2020, when researchers at the University of Wisconsin-Madison asked the question, can applications recover from fsync failures? You can probably guess what the answer is when some of the leading storage researchers, Remzi and Andrea Arpaci-Dusseau and their students, decide to ask the question in this way. If what Fsyncgate had found was stunning, then I think that what this paper found was even more so. Because while databases such as SQLite and Postgres would now crash after an fsync EIO error, after restarting, their recovery would still read from the page cache, not the actual on-disk state. They were potentially making recovery decisions based on non-durable pages that were marked clean through the fsync failure. The paper also raised other ways that fsync failure was not being handled correctly by these databases. For example, ext4 in data mode would suppress an EIO error and only return it to the next fsync call: if you want to get the fsync error, you have to call fsync twice. Users were still at risk of data loss and corruption.
The most important finding, at least for me, was just this little sentence at the end of the abstract. Sometimes in a paper, you read it carefully, and there are these little things that you almost gloss over, and you read again, and there's the sentence that just changes your understanding of things. For me, this sentence at the end of the abstract was one of those: our findings have strong implications for the design of file systems and applications, that is, databases, that intend to provide strong durability guarantees. Strong implications for the design of databases. UW-Madison had shown that the classic buffered I/O design for databases of the past 30 years was now fundamentally broken. There was in fact no correct answer to the question of how to handle fsync failure under the buffered I/O design. The answers were all wrong, and not actually sufficient for a database to guarantee durability. Instead, database design would need to move from buffered I/O to direct I/O. Databases would need to take direct responsibility for durability. They would need to be able to read and write to the disk directly, to be sure that they always made decisions and acknowledgments on durable data, instead of basing decisions only on the contents of the kernel page cache.
I think the implications of this design change are enormous. It's one thing to design a database from scratch, like we did with TigerBeetle, to take advantage of direct I/O. It's something else entirely to try and retrofit an existing design for direct I/O. The reason is that you need to align all your memory allocations and I/O operations to the Advanced Format 4-kilobyte sector size, you need a new buffer pool, a new user space page cache. Because the latencies of your writes to disk are now realistic, that is, disk speed rather than memory speed, you can't afford to block on synchronous I/O anymore. You could have taken those shortcuts in your design before. Now you actually need to implement proper asynchronous I/O in your write path. If you can do this, there are some incredible performance gains. It's a complete overhaul of your whole database design. Or as Andres Freund on the Postgres team said back in 2018, when this happened, efficient DIO usage is a metric ton of work.
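To give a flavor of what that work starts with, here is a minimal sketch in C of a single direct I/O write: the file is opened with O_DIRECT, and the buffer address, transfer size, and offset are all aligned to an assumed 4-kilobyte sector size (the file name and constants are illustrative):

```c
// Minimal direct I/O sketch: bypass the page cache with O_DIRECT.
// Buffer address, transfer size, and file offset must all be sector aligned.
#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

#define SECTOR_SIZE 4096   // assumed Advanced Format sector size

int main(void) {
    int fd = open("example.dat", O_WRONLY | O_CREAT | O_DIRECT, 0644);
    if (fd < 0) { perror("open"); return 1; }

    void *buf = NULL;
    if (posix_memalign(&buf, SECTOR_SIZE, SECTOR_SIZE) != 0) return 1;
    memset(buf, 0, SECTOR_SIZE);
    strcpy(buf, "one sector of data");

    // Write exactly one aligned sector at an aligned offset.
    if (pwrite(fd, buf, SECTOR_SIZE, 0) != SECTOR_SIZE) { perror("pwrite"); return 1; }
    if (fsync(fd) != 0) { perror("fsync"); return 1; }  // still needed for metadata and device caches

    free(buf);
    close(fd);
    return 0;
}
```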
Andres has since done a phenomenal amount of work around direct I/O for Postgres. Andres shared with me that Thomas Munro is planning to merge some actual DIO support soon. For all these reasons, for all this history, I believe that this event, on the 28th of March 2018, drew the first line in the sand. This is what marked the end of an era for database design. This was the first line, I think, and it almost passed us by. The design of new databases that intend to provide strong durability guarantees would have to change, and not only because of Fsyncgate, because something else was about to happen in 2018. Researchers at UW-Madison were again about to discover something just as momentous, this time not in the buffered I/O design, but in the write ahead log design of almost every database you know.
Crash Consistency Through Power Loss
We started out by saying that, for a database to guarantee durability, it must protect committed transactions through power loss. One aspect of this is the write path, which we’ve looked at, to compare buffered I/O with direct I/O in how a database writes transactions to disk and then reads them back in at recovery. How does a database actually recover these transactions after a crash? The idea is pretty simple and common to most databases. It’s called the write ahead log. I first learned the technique when I was diving into Redis, a decade ago. Redis has what is called the AOF, which stands for append only file. This is a log on disk. When Redis wants to commit a transaction, it first appends the transaction to the end of this log, and then it calls fsync. For example, if you want to execute a transaction in Redis to set the QCon key to London, then Redis would append this to the log.
I've simplified it a little here, but this is an elegant format. First, you've got the number of arguments, in this case, 3. Then you've got the number of bytes in each argument, so 3 bytes for the command, which is SET, and 4 bytes for QCon. Then finally, you've got 6 bytes for London. If we then want to update QCon to New York, we would append another transaction to the log like this. The trick is that Redis never updates anything in place in the log; it always appends, so that data is never overwritten. This means that if the power goes, then existing data is not destroyed. There might be some garbage at the end of the log if a partial transaction was being appended when the power went. For example, if we tried to set QCon to San Francisco, and then the power goes, we might end up with a partial transaction. At startup, Redis can figure this out and discard the partial transaction. This is safe to do because Redis would not yet have acknowledged the San Francisco transaction until it was durably in the log.
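To make the format concrete, a minimal C sketch of appending the SET QCon London entry in this simplified text format and forcing it to disk might look like this (the file name is illustrative, and real Redis does considerably more):

```c
// Append one command to an append-only log in the simplified text format:
// number of arguments, then the byte length and bytes of each argument.
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void) {
    int fd = open("appendonly.aof", O_WRONLY | O_CREAT | O_APPEND, 0644);
    if (fd < 0) { perror("open"); return 1; }

    // *3 arguments: SET (3 bytes), QCon (4 bytes), London (6 bytes).
    const char entry[] =
        "*3\r\n"
        "$3\r\nSET\r\n"
        "$4\r\nQCon\r\n"
        "$6\r\nLondon\r\n";

    if (write(fd, entry, strlen(entry)) != (ssize_t)strlen(entry)) return 1;
    if (fsync(fd) != 0) return 1;   // only after this may the SET be acknowledged
    close(fd);
    return 0;
}
```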
This write ahead log design is the foundation for most databases. It's the crucial building block for ensuring atomic changes to database state. It's also the foundation for distributed databases. This is where each node in the cluster will keep its own write ahead log, which is then appended to by a global consensus protocol such as Viewstamped Replication, Raft, or Paxos. However, for distributed databases, the write ahead log is doubly critical. Because if anything goes wrong, if you have a bug that undermines durability, so that you lose a transaction that you've acknowledged to the rest of the cluster, then you've not only lost a copy of user data, which you have; you've also undermined the quorum voting that's going on in the consensus protocol. You mess with the quorum votes. This can easily lead to split brain, which can in turn cause global cluster data loss. The write ahead log is foundational to the design of a database, whether single node or distributed. There are many variants on this design. For example, I've shown you a simplified version of Redis's text format to give you the big idea, but most databases use a binary format. They prefix each transaction in the log with a transaction header. There's also a checksum. If at startup the database sees that the checksum doesn't match, then the database truncates the log from that point to discard the partial transaction. However, it's critical that only the last partial transaction, a transaction that was being appended as the power went out, is truncated. The database must never truncate any other transactions in the write ahead log, because obviously these would have been acknowledged to the user as committed. To truncate them would violate durability. At a high level, this is how most write ahead log designs work today.
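As a sketch of this binary variant, and of the recovery rule that turns out to be the problem, a log entry header and the startup scan might look roughly like this (the header layout and the simple checksum are illustrative placeholders, not any particular database's format):

```c
// Sketch of a classic write-ahead-log entry and its recovery scan. On a
// checksum mismatch the scan truncates the log from that point, assuming the
// mismatch can only be a torn append caused by power loss.
#include <stddef.h>
#include <stdint.h>
#include <string.h>

struct wal_header {
    uint32_t checksum;   // checksum of the payload that follows
    uint32_t size;       // payload size in bytes
};

// Placeholder checksum (FNV-1a); a real log would use a strong CRC.
static uint32_t checksum(const uint8_t *data, size_t len) {
    uint32_t hash = 2166136261u;
    for (size_t i = 0; i < len; i++) { hash ^= data[i]; hash *= 16777619u; }
    return hash;
}

// Returns the offset at which the log should be truncated at startup.
size_t recover(const uint8_t *log, size_t log_size) {
    size_t offset = 0;
    while (offset + sizeof(struct wal_header) <= log_size) {
        struct wal_header header;
        memcpy(&header, log + offset, sizeof(header));
        size_t end = offset + sizeof(header) + header.size;
        if (end > log_size) break;   // entry runs past the end: partial append
        if (checksum(log + offset + sizeof(header), header.size) != header.checksum)
            break;                   // mismatch: assumed to be a torn write, but it could be bit rot
        offset = end;                // entry accepted as committed
    }
    return offset;                   // everything after this offset is discarded
}
```

The break on a checksum mismatch is exactly the assumption the next part of the talk picks apart.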
If we apply what we've learned from Fsyncgate, how does this design interact with real storage hardware? Can you spot the problem? We've seen that QCon London is already committed as a transaction through the write ahead log. What if, while the machine is off, the disk sector containing our QCon London transaction in the write ahead log is corrupted and experiences just a single bit flip, so that the length prefix for London changes from 6 to 4? When the database starts up and reads the log, the QCon key is now going to look like it was set to Lond. Then the New York transaction after that, which was committed, is now going to look like it was being appended when the power went out, because the log is no longer going to have the proper format. It's going to have garbage data at the end, from the o and n onwards. As far as I'm aware, in this situation, most databases would truncate the log before New York, and the committed New York transaction, as well as every transaction after it, would be lost, violating durability. For databases that checksum the transactions in their write ahead log designs, the London key would also be truncated because of the checksum mismatch, even though it was in fact committed. In other words, a single disk sector failure is enough to break the write ahead log design of most databases. They incorrectly truncate the committed log, potentially truncating tens to hundreds of committed transactions. You can see the big problem: they're all trying to solve the problem of power loss and torn writes after a crash. They're being fooled because now you get a little bit of bit rot in the middle of the committed log, and they conflate that with a system crash and just rewind everything, which is not correct.
As far as I know, that's every database out there with a write ahead log. If yours handles this, please let me know. With these designs, users will experience silent data loss, whereas for a single-node database, the correct behavior would be to fail fast and notify the user of the corruption, because at least then they've got a chance to restore from backup. For distributed databases, it's worse. This single disk sector fault would also have the potential to do more damage. Again, like we said, it can mess with the quorum votes, cause split brain, and that can cascade into global cluster data loss. There, you're not just losing one piece of user data in the transactions, you're actually corrupting everything. The whole data file of every replica just gets messed up. Split brain cascades into cluster data loss. At least this was the finding of the research team that discovered this, as they analyzed the write ahead log designs of distributed systems using consensus protocols such as Raft. They wanted to see, if they could just put in one single disk sector fault on one node on a single machine, what could that do to the cluster? Basically, it could do the worst.
The team was again Remzi and Andrea Arpaci-Dusseau and students at UW-Madison. This time, the students were Ramnatthan Alagappan and Aishwarya Ganesan, who won best paper at FAST for their research on this, which was called Protocol-Aware Recovery for Consensus-Based Storage. It was in 2018. This was in fact a month before Fsyncgate. It totally passed us by. We heard about Fsyncgate; I don't think many of us have heard about this. Like Fsyncgate, it's something we still haven't dealt with fully as an industry, I think. Some cloud providers I know have patched their proprietary databases for these findings. I'm not aware of other open source databases that handle this correctly. In the case of Fsyncgate, it took users experiencing data loss, and then being dedicated enough to report the data loss and figure out what had happened. That was not just disk corruption, but a design flaw. The database could have handled Fsyncgate safely, but it didn't; instead it escalated the storage fault into unnecessary data loss. In the case of the write ahead log design flaw, protocol-aware recovery, the research community has done us the favor. They've discovered this proactively, but it's yet to trickle down to inform new designs. How long until the write ahead log powering a major Kubernetes deployment is truncated in a way that leads to split brain, loss of the control plane, and a public outage? I think this is the paper you need to read three times to let the impact sink in.
How Do You Distinguish between Power Loss and Bit Rot in the WAL?
How do you fix a design flaw like this? How do you distinguish between power loss and bit rot, to know whether to truncate the log in the case of power loss, or to raise an error or do distributed recovery in the case of bit rot? The strategy given by the paper, and what we do in TigerBeetle, is to supplement the write ahead log with a second write ahead log. We've actually got two write ahead logs in TigerBeetle; we love them so much. The second write ahead log is very small and lightweight. It just contains a copy of the headers from the first. The first write ahead log has your transactions and transaction headers; the second write ahead log just has the transaction headers. This turns the write ahead log append process into a two-step process. First, you append to the log as you normally would. Then, after you append the transaction, you also append the small transaction header to the second log, so you've got another copy of it. This enables you to distinguish corruption in the middle of the committed log caused by bit rot from corruption at the end of the log caused by power loss. Again, like the fix for Fsyncgate, it's a major design change for a database to have two write ahead logs, not just one. It's not always easy to retrofit.
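A rough sketch of how the second, headers-only log lets recovery tell the two cases apart (this is the idea from the paper rather than TigerBeetle's actual code, and the two helper functions are assumed for illustration):

```c
// Sketch: disambiguating power loss from bit rot with a second, headers-only log.
// `entry_valid` and `header_valid` are assumed helpers that check checksums
// in the main log and in the small header log for a given slot.
#include <stdbool.h>
#include <stdint.h>

extern bool entry_valid(uint64_t slot);    // transaction plus header in the main WAL
extern bool header_valid(uint64_t slot);   // header copy in the second WAL

enum recovery_action { ACCEPT, TRUNCATE_TAIL, REPAIR_OR_FAIL };

enum recovery_action recover_slot(uint64_t slot) {
    if (entry_valid(slot)) return ACCEPT;

    if (header_valid(slot)) {
        // The header log proves this slot was durably appended and acknowledged,
        // so the corrupt entry is bit rot: never truncate. Repair it from another
        // replica (distributed recovery) or fail fast on a single node.
        return REPAIR_OR_FAIL;
    }
    // Neither log has a valid record for this slot: this is a torn append from
    // power loss that was never acknowledged, so it is safe to truncate here.
    return TRUNCATE_TAIL;
}
```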
There were also other findings in the protocol-aware recovery paper from UW-Madison that impact how we design distributed databases, showing that for protocols like Raft, even if the cluster is lucky and a single disk sector fault doesn't lead to global cluster data loss, the fault can still cause the cluster to become prematurely unavailable, because of how storage faults are handled in the write ahead log. You're paying for this durability, but you're not getting the availability. Fixing this is also going to require design changes. For example, in the past, the global consensus protocol and the local storage engine would be separate modules or components. We used to think of these as completely decoupled, not integrated. If you want your distributed database to maximize availability, how your local storage engine recovers from storage faults in the write ahead log needs to be properly integrated with the global consensus protocol. If you want to optimize for high availability, to tolerate up to the theoretical limit of faults that your consensus protocol should be able to tolerate, then it's no longer enough just to cobble together off-the-shelf Raft with an off-the-shelf LSM-tree like RocksDB or LevelDB. Instead, you need to take both of these and, as the paper shows you how to do, make them storage fault aware, so they can talk to each other and both recover. It sounds complicated, but it's really a simple, obvious idea: use your replicated redundancy to heal your local write ahead log. This is a design change that is fundamental enough, I think, to signal a new era for how we design distributed databases. You have to use a consensus protocol that implements this, and a storage engine that implements it. Both of them must talk to each other through these new interfaces. It's a big design change.
Summary: Fsyncgate and Protocol-Aware Recovery
To summarize, what both Fsyncgate and protocol-aware recovery from UW-Madison have in common is that storage faults force us to reconsider how we design our databases to write to disk, or even just to append to a write ahead log, and how they recover at startup. If we can extract a principle from this, then the principle is that if we want to guarantee durability, we need to move beyond a crash safety model. This is the model that we've had up until now, where durability just has to survive power loss. We need to move beyond that idea that the plug can be pulled at any second. Sure, we need to support that, but we need more. We need to actually adopt an explicit storage fault model, so that we can test our designs against the full range of storage faults that we expect: not only crash safety, but storage fault safety. In the security world, this would be the equivalent of a threat model. Just as it can make for better security design to be upfront about your threat model, so I think it can make for better durability, or even just durability at all, if we expect the disk to be not perfectly pristine. The formal proofs for Paxos or Raft assume exactly that: perfect storage. Rather, what I think we need to do is think of disks as near-Byzantine. That's what we call it at TigerBeetle. It's our own made-up term. It's somewhere between non-Byzantine fault tolerance and Byzantine fault tolerance. Disks are somewhere in between; they're near-Byzantine. You do well if you expect the disk to be almost an active adversary.
For TigerBeetle, we saw both these events of 2018 as an opportunity. We could take the growing body of storage fault research, and there's so much out there, all the other papers also coming from UW-Madison, the vanguard of this research. We could take that and design TigerBeetle to not only be crash tolerant, but also storage fault tolerant, to start to see disks as distributed systems. In one disk, there's so much going on in terms of failures. It's like the network fault model: like network partitions, you can get cut off from disk sectors. You start to see just a single disk as a distributed system. It's a whole faulty microcosm where you can have bugs in the disk firmware, in the device drivers, even in the file systems. You can have latent sector errors, where at least you get an explicit EIO from the kernel, as we've seen. Then you can also get silent corruption, where you don't. This could be caused by bit rot, or even lost or misdirected reads or writes, which is just where the disk sends the I/O to the wrong sector for some reason.
For example, here's how we designed TigerBeetle to read our write ahead log at startup, with these two write ahead logs, even in the presence of storage faults. What we really liked about this was that we enumerated all the possible faults we expected according to the fault model while recovering from both write ahead logs. You've got the transaction log and the header log. Then we enumerated all these combinations in a matrix. My co-founder DJ and I worked these out together, the countless cases, worked them out by hand. Then we generated all the failure handling branches in the code dynamically. This way, we could ensure that we wouldn't miss a fault combination: we enumerated them all, worked out what the proper response should be, and then generated the code so that all the branches are there. Of course, these storage faults are rare compared to other machine failures. In large scale deployments, though, even rare failures become prevalent.
Taking Advantage of Direct I/O From a Performance Perspective
At the same time, while we had the opportunity to focus on safety, to design for an explicit storage fault model, and for direct I/O from scratch, we also took the opportunity to ask, how can we take advantage of direct I/O from a performance perspective? In the past, when working with direct I/O, you would use asynchronous I/O, like we've said, so that your system calls don't block your main process. However, historically on Linux, because the API for async I/O wasn't great, you'd have to simulate asynchronous I/O by means of a user space thread pool. Think of libuv. Your database control plane would submit I/O to a user space thread pool, which would then use a blocking syscall to the kernel to do the I/O. The problem with this is that your control plane context switches to your thread pool, your thread pool must pay the cost of the blocking syscall to the kernel, and then you must context switch back to your control plane. In July 2020, as we were designing TigerBeetle, we were asking the performance questions: what if we could have first-class asynchronous I/O but without the complexity of the user space thread pool? With the cost of a context switch in the range of a microsecond, context switches were approaching the same order as just doing the I/O itself, so what if you also didn't need a context switch? We love to ask these questions. Let's just get rid of the thread pool, get rid of the context switch. Even better, with the mitigations after Spectre and Meltdown making syscalls slower, we asked, what if you just didn't need to syscall into the kernel?
We wanted to embrace direct I/O almost to the point of bypassing the kernel completely. We were already bypassing the page cache, and now we were bypassing the context switches and the user space thread pool and the syscalls. Except, obviously, we didn't want the complexity of bypassing the kernel completely. For example, we didn't want to use kernel bypass techniques like SPDK or DPDK, because, firstly, we're just not that good. That's what Redpanda do, which is amazing, one of my favorite databases. That's high performance, but also more complex, I think. Finally, we wanted a unified asynchronous I/O interface with a simple API for networking and storage. Here, again, we were lucky with the timing, because just as we were thinking about these things, Jens Axboe comes along with his new io_uring API for Linux, arriving on the scene and starting to make waves. The reason is that Jens just singlehandedly fixed Linux's asynchronous I/O problem. He landed a series of magnificent patches to the kernel. He actually started at the end of 2018, just touching AIO a little bit. Then comes the first io_uring patch, early in 2019. Here he gives user space a ring buffer to submit I/O to the kernel without blocking. Then he gives the kernel another ring buffer to send I/O completion callbacks back to user space without blocking. Now syscalls could be amortized. No need for a user space thread pool. You can have a single threaded event loop control plane, and then you can use the kernel's own thread pool as your data plane. No more user space thread pool; you get to use the kernel's own thread pool as your data plane. It's much more efficient, and also significantly simpler, which I really love.
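For a feel of the API, here is a minimal sketch in C using liburing that queues a single write, submits it, and reaps the completion (the file name is illustrative and error handling is abbreviated):

```c
// Minimal io_uring sketch with liburing: queue a write in the submission ring,
// submit, then reap the result from the completion ring.
#include <fcntl.h>
#include <liburing.h>
#include <stdio.h>

int main(void) {
    struct io_uring ring;
    if (io_uring_queue_init(8, &ring, 0) < 0) return 1;   // 8-entry rings

    int fd = open("example.log", O_WRONLY | O_CREAT | O_TRUNC, 0644);
    if (fd < 0) return 1;

    const char buf[] = "hello from io_uring\n";

    // Describe the write in a submission queue entry; no syscall happens yet.
    struct io_uring_sqe *sqe = io_uring_get_sqe(&ring);
    io_uring_prep_write(sqe, fd, buf, sizeof(buf) - 1, 0);

    // One submit call can flush many queued operations, amortizing syscalls.
    io_uring_submit(&ring);

    // Wait for the completion queue entry and read the operation's result.
    struct io_uring_cqe *cqe;
    io_uring_wait_cqe(&ring, &cqe);
    printf("write returned %d\n", cqe->res);
    io_uring_cqe_seen(&ring, cqe);

    io_uring_queue_exit(&ring);
    return 0;
}
```

Batching many entries per submit, or using the kernel-side polling mode, is what amortizes or removes the syscalls the talk refers to.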
If you've enjoyed Martin Thompson's brilliant talks at QCon over the years, then you'll recognize that the design of io_uring has what Martin calls mechanical sympathy. If you don't know this term, then I know you didn't watch Martin's talks, but please go watch them. They're fantastic. They've made a huge impact on TigerBeetle's design. I think, again, io_uring is a beautiful design. What I love most of all about io_uring is how it not only gives the perfect interface for file and disk I/O, but also for network I/O. Historically, storage and network were different APIs. Now, with io_uring, you've got a unified interface for both. We really couldn't ask for a better I/O API. In fact, it's so good, I think io_uring alone is a perfect excuse to redesign the way that our databases do I/O in 2023.
New Databases for the Future
Given these major design changes already, at what point do we start to consider the next 30 years of databases? If we've made the decision that we're going to write new databases for the future, with new designs, what languages are we going to write them in? Are we going to use the systems languages of the last 30 years, C and C++? Or are we going to use the systems languages of the next 30 years? These questions were on our mind when we were deciding whether to write TigerBeetle in C, or else in a new systems language called Zig that we had been following for two years by that point. Barbara Liskov said that if you want to teach programmers new ideas, you need to give them new languages to think those ideas in. What we came to appreciate with Zig is that it fixed all the issues we had with C and also made it easier to write correct code. It also resonated with our thinking on how you work with memory efficiently as you do systems programming. Memory is probably the most important aspect of systems programming. How good are you at working with memory? How sharp are your tools for doing that?
For example, it was refreshing how easily we could implement direct I/O in Zig, where all the memory that you pass to the disk is sector aligned, and where you can enforce this in the type system. Even with the allocators, you don't need special calls anymore just to do an aligned allocation. With Zig there's also no second-class macro language. Instead, you simply program in Zig at compile time as your binary is being compiled. Your meta language is also Zig. We considered Rust, but io_uring and the ability to use the kernel thread pool without context switches meant we had less need or desire for fearless multi-threading in user space. We didn't actually want to do that; we were trying to get away from that. The borrow checker would still have been useful for a single threaded event loop, for logical concurrency bugs, but we didn't want to do multi-threading in the first place. We also wanted TigerBeetle to follow NASA's power of 10 rules for safety critical code. You have to handle memory allocation failure for your database. We also wanted to enforce explicit limits on all resources in the design. Even for loops, there's no while true; every loop must terminate, and there's an expected bound on how long that loop can possibly run for. Another big example of this is TigerBeetle's use of static memory allocation. This means that all the memory that TigerBeetle needs is calculated and allocated at startup. After that, there's no dynamic allocation at runtime. This is almost a lost art, but we wanted to bring it back to enable TigerBeetle to decouple database performance from memory allocation, for extreme memory efficiency and a predictable operating experience. This means that after you start TigerBeetle, there's no more malloc or free, no risk of fragmentation. It's just pure predictable performance. We've made resource usage explicit like this throughout the design, with memory and with other resources as well.
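To give a flavor of the static allocation style, in C rather than Zig: every limit is explicit, everything is sized and allocated once at startup, and nothing is allocated afterwards (the limits and structures here are made up for illustration):

```c
// Sketch of static memory allocation: all limits are explicit, everything is
// sized and allocated once at startup, and there is no malloc/free afterwards.
#include <stdint.h>
#include <stdlib.h>

#define MAX_CLIENTS        64        // illustrative limits, fixed at startup
#define MAX_MESSAGE_SIZE   (1 << 20)
#define WAL_BUFFER_SIZE    (64 << 20)

struct server {
    uint8_t *message_buffers;   // MAX_CLIENTS * MAX_MESSAGE_SIZE bytes
    uint8_t *wal_buffer;        // WAL_BUFFER_SIZE bytes
};

int server_init(struct server *s) {
    // The only allocations in the program's lifetime happen here.
    s->message_buffers = malloc((size_t)MAX_CLIENTS * MAX_MESSAGE_SIZE);
    s->wal_buffer = malloc(WAL_BUFFER_SIZE);
    if (!s->message_buffers || !s->wal_buffer) return -1;  // handle allocation failure
    return 0;
}
// After server_init succeeds, the rest of the system only hands out slices of
// these buffers; fragmentation and allocation latency are off the table.
```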
It's like cgroups in your database. This is striking, I think, compared to designs where the limits have not been thought through or made explicit. We really enjoy this model of building the database. Because TigerBeetle makes everything explicit, from the storage fault model to resource usage, we can test everything, and actually test it to the limit. We've got all the limits, so we can now test them. For example, to simulate disk corruption on the read or write path, in the double-digit percentages, on every node in the cluster, even the primary, we inject faults up to the theoretical limit, and then we check that TigerBeetle doesn't truncate committed transactions. Is our write ahead log design working? Can the write ahead log use replicated redundancy to heal itself? Can we preserve durability? Can we remain available?
What's also powerful about this testing is that everyone on our team can run this simulator on their local laptop, and every bug that the simulator finds can be reproduced deterministically, replayed again and again, for incredible developer velocity. As you're building a new feature, you just run the simulator. This is the final major design decision in TigerBeetle, because the whole database has been designed from the ground up as a deterministic distributed database. This means that all the abstractions are deterministic. For example, again, the control plane is single threaded. There's no nondeterminism from the operating system's thread scheduler that could be a source of randomness. Even the source of time, the clock in each node of the database, is deterministic. It's an abstraction that you can tick. It's got a tick method, just like you would tick the second hand of a clock. Then we can shim the source of time, or we can shim the disk, or the message bus; we can give the message bus a network that's actually backed by a packet simulator. Then we can run a whole cluster of TigerBeetle replicas in a single process. This has the second-order effect that, because we control the whole simulation, we can literally speed up time if we want. Instead of ticking time every 10 milliseconds, as we would normally do if we want 10-millisecond granularity in our timeouts, we can tick time in a tight loop, so that every iteration of the loop simulates 10 milliseconds of real-world time in a fraction of the time.
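A toy version of the deterministic clock idea in C (the granularity and structure are illustrative): real time never appears in the simulation, time only advances when the harness calls tick, and a tight loop can advance simulated time far faster than wall-clock time.

```c
// Toy deterministic clock: simulated time advances only when tick() is called.
#include <stdint.h>
#include <stdio.h>

#define TICK_MS 10   // each tick represents 10 milliseconds of simulated time

struct sim_clock { uint64_t ticks; };

static void tick(struct sim_clock *c) { c->ticks++; }
static uint64_t now_ms(const struct sim_clock *c) { return c->ticks * TICK_MS; }

int main(void) {
    struct sim_clock clock = {0};
    // For example, 235,000 ticks of 10 ms each is about 39 minutes of simulated
    // time, executed as fast as the loop body allows.
    for (uint64_t i = 0; i < 235000; i++) {
        tick(&clock);
        // ... drive timeouts, replicas, the packet simulator, fault injection ...
    }
    printf("simulated %llu ms\n", (unsigned long long)now_ms(&clock));
    return 0;
}
```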
We actually worked out the time dilation numbers for this last week, and an average run of the TigerBeetle simulator takes just 3.3 seconds to run a pretty interesting simulation, with 64 committed operations, all kinds of stuff, in just 3.3 seconds. That executes 235,000 clock ticks, each of which represents 10 milliseconds of time in the real world; in other words, 39 minutes in total. You can run the simulator for 3.3 seconds on your laptop, and you've achieved the equivalent of 39 minutes of simulated test time, full of all kinds of network latencies, packet drops, partitions, crashes, disk corruptions, disk slowdowns, every possible fault. If you want to debug something that would normally take 39 minutes to manifest, you can now do that in just 3.3 seconds. It's a speedup factor of 712 times. Existing test harnesses like Jepsen are fantastic, but they're not deterministic. If you find a bug, you might not find it again. Also, they run in real time, so if you want 39 minutes of test time, you have to give up 39 minutes of real-world time on your local laptop. You don't get the same developer velocity. Being able to speed up time like this feels like magic, like a silver bullet. What I'm most excited about is that because everything in TigerBeetle is abstracted, even the state machine logic, you can just take this accounting state machine out, put another state machine in, and you've got a whole new distributed database, but you benefit from all the fault tolerance testing of TigerBeetle.
For example, we've done this internally to create our own control plane database in a week, where before it might have taken years. That's why I believe we're really in a new era. The rate at which new databases can be created is going to accelerate, and they're going to operate and be tested at much tighter tolerances than anything we've seen before. Each of these advances, direct I/O, protocol-aware recovery for consensus-based storage, an explicit storage fault model, io_uring, an efficient replacement for C in Zig, and deterministic simulation testing, on its own, I think, makes for a whole new dimension in database design that is hard to retrofit. Taken together, these advances in database design are going to unlock an abundance of new, correct, and high-performance open source database management systems, tailored to the domain. It's a new era for database design, and I think it's a good day to do this.
See more presentations with transcripts
Global NoSQL Database Market Size and Forecast | Objectivity Inc, Neo Technology Inc …

MMS • RSS
Posted on nosqlgooglealerts. Visit nosqlgooglealerts

New Jersey, United States – Our report on the Global NoSQL Database market provides a comprehensive overview of the industry, with detailed information on the current market trends, market size, and forecasts. It contains market-leading insight into the key drivers of the segment, and provides an in-depth examination of the most important factors influencing the performance of major companies in the space, including market entry and exit strategies, key acquisitions and divestitures, technological advancements, and regulatory changes.
Furthermore, the NoSQL Database market report provides a thorough analysis of the competitive landscape, including detailed company profiles and market share analysis. It also covers the regional and segment-specific growth prospects, comprehensive information on the latest product and service launches, extensive and insightful insights into the current and future market trends, and much more. Thanks to our reliable and comprehensive research, companies can make informed decisions about the best investments to maximize the growth potential of their portfolios in the coming years.
Get Full PDF Sample Copy of Report: (Including Full TOC, List of Tables & Figures, Chart) @ https://www.verifiedmarketresearch.com/download-sample/?rid=129411
Key Players Mentioned in the Global NoSQL Database Market Research Report:
In this section of the report, the Global NoSQL Database Market focuses on the major players that are operating in the market and the competitive landscape present in the market. The Global NoSQL Database report includes a list of initiatives taken by the companies in the past years along with the ones, which are likely to happen in the coming years. Analysts have also made a note of their expansion plans for the near future, financial analysis of these companies, and their research and development activities. This research report includes a complete dashboard view of the Global NoSQL Database market, which helps the readers to view in-depth knowledge about the report.
Objectivity Inc, Neo Technology Inc, MongoDB Inc, MarkLogic Corporation, Google LLC, Couchbase Inc, Microsoft Corporation, DataStax Inc, Amazon Web Services Inc & Aerospike Inc.
Global NoSQL Database Market Segmentation:
NoSQL Database Market, By Type
• Graph Database
• Column Based Store
• Document Database
• Key-Value Store
NoSQL Database Market, By Application
• Web Apps
• Data Analytics
• Mobile Apps
• Metadata Store
• Cache Memory
• Others
NoSQL Database Market, By Industry Vertical
• Retail
• Gaming
• IT
• Others
For a better understanding of the market, analysts have segmented the Global NoSQL Database market based on application, type, and region. Each segment provides a clear picture of the aspects that are likely to drive it and the ones expected to restrain it. The segment-wise explanation allows the reader to get access to particular updates about the Global NoSQL Database market. Evolving environmental concerns, changing political scenarios, and differing approaches by the government towards regulatory reforms have also been mentioned in the Global NoSQL Database research report.
In this chapter of the Global NoSQL Database Market report, the researchers have explored the various regions that are expected to witness fruitful developments and make serious contributions to the market’s burgeoning growth. Along with general statistical information, the Global NoSQL Database Market report has provided data of each region with respect to its revenue, productions, and presence of major manufacturers. The major regions which are covered in the Global NoSQL Database Market report includes North America, Europe, Central and South America, Asia Pacific, South Asia, the Middle East and Africa, GCC countries, and others.
Inquire for a Discount on this Premium Report @ https://www.verifiedmarketresearch.com/ask-for-discount/?rid=129411
What to Expect in Our Report?
(1) A complete section of the Global NoSQL Database market report is dedicated for market dynamics, which include influence factors, market drivers, challenges, opportunities, and trends.
(2) Another broad section of the research study is reserved for regional analysis of the Global NoSQL Database market where important regions and countries are assessed for their growth potential, consumption, market share, and other vital factors indicating their market growth.
(3) Players can use the competitive analysis provided in the report to build new strategies or fine-tune their existing ones to rise above market challenges and increase their share of the Global NoSQL Database market.
(4) The report also discusses competitive situation and trends and sheds light on company expansions and merger and acquisition taking place in the Global NoSQL Database market. Moreover, it brings to light the market concentration rate and market shares of top three and five players.
(5) Readers are provided with findings and conclusion of the research study provided in the Global NoSQL Database Market report.
Key Questions Answered in the Report:
(1) What are the growth opportunities for the new entrants in the Global NoSQL Database industry?
(2) Who are the leading players functioning in the Global NoSQL Database marketplace?
(3) What are the key strategies participants are likely to adopt to increase their share in the Global NoSQL Database industry?
(4) What is the competitive situation in the Global NoSQL Database market?
(5) What are the emerging trends that may influence the Global NoSQL Database market growth?
(6) Which product type segment will exhibit high CAGR in future?
(7) Which application segment will grab a handsome share in the Global NoSQL Database industry?
(8) Which region is lucrative for the manufacturers?
For More Information or Query or Customization Before Buying, Visit @ https://www.verifiedmarketresearch.com/product/nosql-database-market/
About Us: Verified Market Research®
Verified Market Research® is a leading Global Research and Consulting firm that has been providing advanced analytical research solutions, custom consulting and in-depth data analysis for 10+ years to individuals and companies alike that are looking for accurate, reliable and up-to-date research data and technical consulting. We offer insights into strategic and growth analyses, data necessary to achieve corporate goals, and help make critical revenue decisions.
Our research studies help our clients make superior data-driven decisions, understand market forecasts, capitalize on future opportunities and optimize efficiency by working as their partner to deliver accurate and valuable information. The industries we cover span a large spectrum, including Technology, Chemicals, Manufacturing, Energy, Food and Beverages, Automotive, Robotics, Packaging, Construction, Mining & Gas, etc.
We, at Verified Market Research, assist in understanding holistic market indicating factors and most current and future market trends. Our analysts, with their high expertise in data gathering and governance, utilize industry techniques to collate and examine data at all stages. They are trained to combine modern data collection techniques, superior research methodology, subject expertise and years of collective experience to produce informative and accurate research.
Having serviced over 5,000 clients, we have provided reliable market research services to more than 100 Global Fortune 500 companies such as Amazon, Dell, IBM, Shell, Exxon Mobil, General Electric, Siemens, Microsoft, Sony and Hitachi. We have co-consulted with some of the world's leading consulting firms like McKinsey & Company, Boston Consulting Group, and Bain and Company for custom research and consulting projects for businesses worldwide.
Contact us:
Mr. Edwyne Fernandes
Verified Market Research®
US: +1 (650)-781-4080
UK: +44 (753)-715-0008
APAC: +61 (488)-85-9400
US Toll-Free: +1 (800)-782-1768
Email: sales@verifiedmarketresearch.com
Website:- https://www.verifiedmarketresearch.com/

MMS • RSS
Posted on mongodb google news. Visit mongodb google news

MongoDB, Inc. is a developer data platform company. Its developer data platform is an integrated set of databases and related services that allow development teams to address the growing variety of modern application requirements. Its core offerings are MongoDB Atlas and MongoDB Enterprise Advanced. MongoDB Atlas is its managed multi-cloud database-as-a-service offering that includes an integrated set of database and related services. MongoDB Atlas provides customers with a managed offering that includes automated provisioning and healing, comprehensive system monitoring, managed backup and restore, default security and other features. MongoDB Enterprise Advanced is its self-managed commercial offering for enterprise customers that can run in the cloud, on-premises or in a hybrid environment. It provides professional services to its customers, including consulting and training. It has over 40,800 customers spanning a range of industries in more than 100 countries around the world.
Article originally posted on mongodb google news. Visit mongodb google news
Decrease in Achmea Investment Management B.V.’s Stake Raises Concerns about … – Best Stocks

MMS • RSS
Posted on mongodb google news. Visit mongodb google news
As of the first quarter of this year, Achmea Investment Management B.V., a prominent institutional investor, has significantly reduced its stake in MongoDB, Inc. (NASDAQ: MDB). According to a Form 13F filing with the Securities and Exchange Commission, Achmea Investment Management B.V. now holds a mere 137 shares of the company’s stock after selling 4,328 shares during the quarter. This move marks a substantial decrease in their holdings and suggests a potential change in their investment strategy.
The value of Achmea Investment Management B.V.’s MongoDB holdings amounted to $32,000 at the end of the most recent reporting period. This reduction indicates that the institutional investor may have found more attractive investment opportunities or decided to reallocate its resources elsewhere. The decision to trim its position in MongoDB raises questions about the future prospects of the company and its ability to generate substantial returns for its shareholders.
In order to gain further insight into MongoDB’s current financial performance, it is essential to examine the company’s recent quarterly earnings report. On August 3, 2023, MongoDB released its quarterly earnings results for Q2 of fiscal year 2023. According to these results, the company reported earnings per share (EPS) of $0.56 for the quarter.
This figure exceeded analysts’ consensus estimates by an astounding $0.38 per share. The positive deviation from expectations demonstrates MongoDB’s ability to outperform market projections and suggests that they may possess a competitive advantage in their industry.
However, it is worth noting that despite surpassing expectations in terms of EPS, MongoDB reported a negative net margin of 23.58% and a negative return on equity (ROE) of 43.25%. These concerning figures indicate that although their revenue has increased substantially, profitability remains an area that requires improvement.
Quarterly revenue stood at $368.28 million for Q2 2023, surpassing the consensus estimate of $347.77 million. This represents a solid 29.0% increase in revenue compared to the same period last year, highlighting MongoDB’s ability to grow its top-line.
Sell-side analysts are forecasting that MongoDB, Inc. will post -$2.80 earnings per share for the current fiscal year. This projection suggests that the company may struggle to generate positive earnings and achieve profitability in the near term.
As investors and market participants evaluate these financial results, it is crucial to consider the broader implications for MongoDB’s future outlook and shareholder value. With Achmea Investment Management B.V.’s decision to trim its position significantly, other investors may follow suit or reassess their own holdings in MongoDB.
Investors should keep an eye on MongoDB’s ongoing efforts to improve profitability and address concerns related to negative net margin and ROE. The success of any future initiatives undertaken by management will likely play a critical role in dictating the company’s prospects for long-term sustainability and growth.
In conclusion, while Achmea Investment Management B.V.’s reduction in their stake raises doubts about MongoDB’s future trajectory, the company’s ability to exceed EPS expectations demonstrates its potential for continued success. As investors await further updates from MongoDB regarding its strategic initiatives, it remains essential to closely monitor developments within this dynamic market segment and exercise prudence when making investment decisions pertaining to the company.
MongoDB’s Institutional Interest and Insider Transactions Drive Stock Surge
MongoDB’s Diverse Investor Base Fuels Stock Surge Amidst Notable Insider Transactions
Date: August 3, 2023
MongoDB, Inc. (MDB), a leading technology company specializing in modern database platforms, has gained significant attention from institutional investors and hedge funds in recent months. This surge in interest has coincided with notable insider transactions and positive analyst ratings, propelling the stock to new heights.
Numerous hedge funds and institutional investors have recently entered and exited positions in MDB. For instance, Bessemer Group Inc. acquired a stake valued at $29,000 during the fourth quarter of the previous year. Similarly, BI Asset Management Fondsmaeglerselskab A S purchased shares valued at approximately $30,000 during the same period.
Lindbrook Capital LLC increased its holdings by an impressive 350% during the fourth quarter; after purchasing an additional 133 shares, it now owns 171 shares valued at $34,000. Another notable investor is Y.D. More Investments Ltd., which acquired a position in MDB worth roughly $36,000 during the same quarter.
Meanwhile, CI Investments Inc., demonstrating its confidence in MDB’s potential growth prospects, grew its stake by an impressive 126.8% during the fourth quarter. Following this increase, CI Investments Inc. now owns 186 shares of MDB stock valued at approximately $37,000.
Impressively, these institutional investors collectively own a staggering 89.22% of MongoDB’s outstanding stock – indicative of their conviction in the company’s future success within the dynamic technological landscape.
On Thursday, MDB opened trading at $398.74 per share, reflecting a flourishing market capitalization driven in part by robust institutional interest. It is worth noting that public sentiment towards MDB has been bolstered by strong financial performance and consistently solid fundamentals.
MongoDB is renowned for its low debt-to-equity ratio of 1.44 and robust liquidity, highlighting its financial stability. With a quick ratio and current ratio both standing at 4.19, the company boasts substantial assets to cover its liabilities efficiently.
MDB has recorded a 52-week low of $135.15 and a 52-week high of $439.00, a range that represents substantial gains for investors who capitalized on the stock's climb.
Furthermore, technical analysis showcases favorable price trends for MDB with key moving averages reflecting positive momentum. The 50-day simple moving average stands at $383.81, indicating an upward trajectory in stock value in recent weeks. Additionally, the 200-day simple moving average sits at a healthy $280.54 – emphasizing the prolonged bullish trend observed over time.
In terms of corporate insider transactions, on Monday, July 3rd, CAO Thomas Bull sold 516 shares of MDB stock at an average price of $406.78 per share, for a total transaction value of $209,898.48. Following the sale, Bull owns 17,190 shares valued at $6,992,548.20.
Moreover, CEO Dev Ittycheria sold 50,000 shares on Wednesday, July 5th at an average price of $407.07 per share, generating a transaction value of $20,353,500. Following this transaction, Ittycheria retains a significant stake in MongoDB of 218,085 shares, valued at approximately $88,775,860.95.
It is important to note that these insider transactions have not deterred investor enthusiasm but rather solidified analysts’ confidence in MongoDB’s growth trajectory.
Prominent research firms have consistently rated MDB highly due to its strong performance and considerable potential within the database management domain. Recently, Barclays and Royal Bank of Canada raised their price targets from $374 to $421 and from $400 to $445, respectively, in recognition of MDB's accelerated growth.
In conclusion, MongoDB’s surging stock price has captivated the attention of institutional investors and hedge funds. This influx of capital infusion has coincided with notable insider transactions that further validate the long-term confidence in the company. As a result, analysts have maintained a positive outlook on MDB, expecting it to continue its upward trajectory within the modern database industry.
Article originally posted on mongodb google news. Visit mongodb google news

MMS • Sergio De Simone
Article originally posted on InfoQ. Visit InfoQ

Meta has open sourced its text-to-music generative AI, AudioCraft, for researchers and practitioners to train their own models and help advance the state of the art.
AudioCraft comprises three models: MusicGen, which generates music from textual prompts; AudioGen, which generates environmental sounds; and EnCodec, an AI-powered encoder/quantizer/decoder.
Today, we're excited to release an improved version of our EnCodec decoder, which allows for higher quality music generation with fewer artifacts; our pre-trained AudioGen model, which lets you generate environmental sounds and sound effects like a dog barking, cars honking, or footsteps on a wooden floor; and all of the AudioCraft model weights and code.
According to Meta, AudioCraft is able to generate high-quality audio using a natural interface. Furthermore, they say, it simplifies the state-of-the-art design in the audio generation field through a novel approach.
In particular, they explain, AudioCraft uses the EnCodec neural audio codec to learn audio tokens from the raw signal. This step builds a fixed "vocabulary" of audio tokens, which are then fed to an autoregressive language model. That model is trained to exploit the tokens' internal structure and capture their long-term dependencies, which is critical for music generation. Finally, the trained model generates new tokens from a textual description, and these are fed back to EnCodec's decoder to synthesize sounds and music.
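To make the "vocabulary of audio tokens" idea concrete, the sketch below round-trips a waveform through EnCodec. It is a minimal illustration, assuming the standalone open-source encodec package; the input file name and target bandwidth are hypothetical choices, not values prescribed by Meta.

```python
# Minimal sketch: encode raw audio into discrete EnCodec tokens and decode
# them back. Assumes the facebookresearch/encodec package; "input.wav" and
# the 6.0 kbps bandwidth are illustrative assumptions.
import torch
import torchaudio
from encodec import EncodecModel
from encodec.utils import convert_audio

# Load a pretrained 24 kHz EnCodec model; the target bandwidth controls
# how many codebooks (parallel token streams) are used.
model = EncodecModel.encodec_model_24khz()
model.set_target_bandwidth(6.0)

# Read an audio file and resample/remix it to the model's expected format.
wav, sr = torchaudio.load("input.wav")
wav = convert_audio(wav, sr, model.sample_rate, model.channels).unsqueeze(0)

# Encode the raw waveform into discrete audio tokens: the fixed "vocabulary"
# that the autoregressive language model is trained on.
with torch.no_grad():
    encoded_frames = model.encode(wav)
codes = torch.cat([frame[0] for frame in encoded_frames], dim=-1)  # [batch, n_codebooks, time]

# Decoding the frames reconstructs a waveform from tokens, which is how
# generated tokens are turned back into audible sound.
with torch.no_grad():
    reconstructed = model.decode(encoded_frames)
```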
Generating high-fidelity audio of any kind requires modeling complex signals and patterns at varying scales. Music is arguably the most challenging type of audio to generate because it’s composed of local and long-range patterns, from a suite of notes to a global musical structure with multiple instruments.
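In practice, the released MusicGen model wraps this token-based pipeline behind a small API. The following is a minimal sketch, assuming the audiocraft package and the 'facebook/musicgen-small' checkpoint name from its public documentation; the prompts and duration are illustrative.

```python
# Minimal text-to-music sketch with the audiocraft package. The checkpoint
# name, prompts, and duration are assumptions for illustration.
from audiocraft.models import MusicGen
from audiocraft.data.audio import audio_write

# Load a pretrained MusicGen model: an autoregressive language model over
# EnCodec audio tokens, paired with the EnCodec decoder.
model = MusicGen.get_pretrained('facebook/musicgen-small')
model.set_generation_params(duration=8)  # seconds of audio per sample

descriptions = [
    'lo-fi hip hop beat with a mellow piano melody',
    'upbeat electronic track with a driving bassline',
]
wav = model.generate(descriptions)  # one waveform per text prompt

for idx, one_wav in enumerate(wav):
    # Writes {idx}.wav with loudness normalization.
    audio_write(f'{idx}', one_wav.cpu(), model.sample_rate, strategy="loudness")
```

Meta also published medium and large MusicGen checkpoints, so the model name above is only one of several options, traded off against generation speed and available hardware.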
As mentioned, AudioCraft is open source, which Meta hopes can help the research community to further build on it:
Having a solid open source foundation will foster innovation and complement the way we produce and listen to audio and music in the future: think rich bedtime story readings with sound effects and epic music. With even more controls, we think MusicGen can turn into a new type of instrument — just like synthesizers when they first appeared.
While most of AudioCraft is open source, the license Meta chose for the model weights, CC-BY-NC, is restrictive enough that it does not qualify as fully open source, as Hacker News commenters pointed out.
Specifically, the non-commercial clause violates point six of the Open Source Initiative's definition of open source, which is likely explained by the fact that Meta used Meta-owned and specifically licensed music to train those weights. The rest of the components are instead released under the MIT license.

MMS • RSS
Posted on mongodb google news. Visit mongodb google news
Westpac Banking Corp reduced its holdings in MongoDB, Inc. (NASDAQ:MDB) by 2.8% during the first quarter of this year, a move revealed in the company's most recent disclosure with the Securities and Exchange Commission.
According to the filing, Westpac Banking Corp owned 10,479 shares of MongoDB's stock after parting ways with 300 shares over the course of the quarter, a stake valued at $2,443,000 at the end of the reporting period.
For those unfamiliar with the company, MongoDB, Inc. provides a general-purpose database platform to a global audience. Its offerings include MongoDB Atlas, a hosted multi-cloud database-as-a-service that has garnered considerable attention; MongoDB Enterprise Advanced, a commercial database server designed for enterprise customers operating in cloud, on-premises, or hybrid environments; and Community Server, a free-to-download version of the database that gives developers the essential functionality needed to start building with MongoDB.
Turning to ticker data from Wednesday's trading session, shares of MongoDB opened at $424.57. Investors analyzing the company's fundamentals may note several key figures that outline its financial footing:
– A quick ratio of 4.19.
– A current ratio mirroring precisely that aforementioned 4.19.
– A notable debt-to-equity ratio standing firmly at 1.44.
For investors who weigh price momentum over longer timeframes, MongoDB's 50-day moving average price currently stands at $381.38, reflecting recent consolidation, while its 200-day moving average price of $279.71 captures the longer-term uptrend.
MongoDB's 52-week range runs from a low of $135.15 to a high of $439.00, underscoring both the challenging market conditions the stock has weathered and the record-breaking levels it has since reached.
Such intriguing disclosures and nuanced movements in the financial landscape truly encapsulate the unpredictable nature of markets, where giants like Westpac Banking Corp quietly enter the stage, leveraging their financial prowess and calculative methodologies for optimal returns on investments. Meanwhile, entities like MongoDB continue to disrupt market segments with cutting-edge solutions adored by global clientele—strengthening their foothold with every transaction.
As we delve deeper into August 2023—a time brimming with limitless possibilities—the complex interplay between investment brilliance and transformative technological innovations promises unprecedented outcomes for companies navigating these dynamic ecosystems. Only time will reveal the remarkable developments yet to unfold within this captivating narrative permeating across industries far and wide.
Institutional Investors and Hedge Funds Make Significant Changes in MongoDB Ownership
August 2, 2023 – MongoDB, Inc. (NASDAQ:MDB) has recently experienced significant changes in the positions of institutional investors and hedge funds. In the fourth quarter of last year, 1832 Asset Management L.P. raised its position in MongoDB by a staggering 3,283,771%. This increase resulted in the acquisition of an additional 1,017,969 shares, bringing its total ownership to 1,018,000 shares valued at $200,383,000. Renaissance Technologies LLC also saw a notable boost in its position, with an increase of 493.2% during the same period. Its holdings now amount to 918,200 shares worth $180,738,000.
Notably, Norges Bank entered as a new stakeholder in MongoDB during the fourth quarter with a purchase worth $147,735,000. Another significant change was witnessed from William Blair Investment Management LLC’s increase in position by an astonishing 2,354.2%, bringing their total ownership to 387,366 shares valued at $76,249,000.
First Trust Advisors LP also made significant moves during this time frame, increasing its position by 72.9% through the acquisition of an additional 258,783 shares; the stake is valued at $120,935,000.
It is interesting to note that hedge funds and other institutional investors currently own approximately 89.22% of MongoDB’s stock.
In other news related to MongoDB's financial activities and changes among its management personnel, Dwight A. Merriman, a director at MongoDB, was involved in several stock sale transactions during July. On July 18th, Merriman sold 1,000 shares at $420 per share, for a total transaction value of $420,000. Following this sale, the director's holdings would drop to 1,213,159 shares, with a current value estimated at $509,526,780. It is pertinent to note that this sale was completed in a transaction properly disclosed in legal filings available for public perusal through the SEC (U.S. Securities and Exchange Commission).
Furthermore, two additional sales were also reported on July 18th, in which Merriman once again sold 1,000 shares of MongoDB stock at an average price of $420 per share, for a total of $420,000. With these transactions completed, the director's holding of MongoDB stock currently stands at 1,213,159 shares valued at $509,526,780. Finally, Merriman made another transaction on July 10th, selling 606 shares at an average price of $382.41 for a total of $231,740.46. Following this transaction, his holdings, recently estimated at about 1,214,159 shares, carry a total value of around $464 million.
Based on these reports, insiders have sold roughly 116,417 shares since March, realizing estimated proceeds of $41 million. In relation to current shareholdings, insiders own approximately 4.80% of MongoDB's stock.
MongoDB, Inc. operates as an international provider of general-purpose database platforms whose services are accessible virtually worldwide. The firm offers various packages, such as MongoDB Atlas, a multi-cloud database-as-a-service; MongoDB Enterprise Advanced, a commercial database server designed for major enterprise clients who may situate their databases in a hybrid, on-premises, or cloud-based setting; and MongoDB Community Server, a free offering differentiated mainly by price point and the set of basic functionality it includes. Worth noting is that the company most recently posted its quarterly earnings results on June 1st, recording earnings per share (EPS) of $0.56, a result that exceeded expectations, as analysts had predicted figures around $0.18.
Despite posting a negative net margin and a negative return on equity in its recent quarterly report, MongoDB's revenue for the same period rose by a significant 29% compared to last year. Equities research analysts estimate that MongoDB, Inc. will post -$2.80 earnings per share for the current fiscal year.
Several brokerages have been evaluating MongoDB's performance recently. Tigress Financial raised its price target from $365.00 to $490.00 in a research report dated June 28th, whereas William Blair reaffirmed its "outperform" rating on MongoDB shares in a report released on June 2nd. Guggenheim, however, lowered its rating on MongoDB from "neutral" to "sell"; despite this downgrade, Guggenheim raised its price target from $205 to approximately $210. In a similar vein, Morgan Stanley increased its price target from $270 to around $440. Additionally, analysts at Oppenheimer
Article originally posted on mongodb google news. Visit mongodb google news