Presentation: Adventures in Performance

Thomas Dullien

Article originally posted on InfoQ.

Transcript

Dullien: My name is Thomas Dullien. Welcome to my talk, Adventures in Performance, where I'll talk about all the interesting things I've learned in recent years after switching my career from spy-versus-spy security work to performance work. Why would anybody care about performance? First off, there are three macro economic trends that are conspiring to make performance and efficiency important again. The first is the death of the economic version of Moore's law, which means that new hardware no longer comes with a commensurate speedup the way it used to. The second is the move from on-premise software to SaaS, where the cost of computing is now borne by the vendor of services and digital goods: the cost of computational inefficiency, which used to be borne by the customer, is nowadays paid by the provider, which means inefficiency cuts straight into gross margins, and through that into company valuations. Lastly, there is the move from on-premise hardware to cloud: in a pay-as-you-go scheme, if you find a clever way of optimizing your computing stack, you realize the savings literally the next day. All of these things are coming together to push performance from something that was a bit of a nerdy niche subject back into the limelight. Efficiency is becoming really important again, and with the cost of AI and so forth, this is only going to continue over the next years.

Here's a diagram of single-core speedup. Here's a diagram that shows the relative cost per transistor over the different process nodes in chip manufacturing. We can see that the cost per transistor is just not falling at the rate it used to. Lastly, I mentioned gross margins and company valuation.
You can see in this slide that there’s a reasonably linear relationship between gross margins and company valuations for SaaS businesses, which means improving your gross margins will add multiple of your revenue to your overall company valuation.

Why Care About Performance? (Personal Reasons)

Aside from the business reasons, what are personal reasons why anybody would care about performance? Throughout my entire career, it has been difficult for me to align what I find technically interesting, what is economically viable, and what is ideologically aligned with my values. One of the things that I really like about performance work is that it tends to be technically interesting, because it's full-stack computer science. It tends to be economically viable, because if I make things more efficient, I save people money, and they're usually willing to pay for that. Lastly, it's ideologically aligned, because I'm working on abundance, meaning I'm trying to generate the same amount of output for society with fewer inputs, which is something that aligns with my personal value set. I prefer not to work on scarcity, and I prefer not to work on human-versus-human zero-sum games.

My Path: From Spy vs. Spy Security to Performance Engineering

My personal path goes from essentially spy-versus-spy security to performance engineering. The interesting thing here is that both are full-stack computer science, meaning in both cases you get to look at the high-level design of systems and at the low-level implementation details. All of that is relevant for the overall goal, meaning security or insecurity on the one hand, and performance on the other. In both situations, you end up analyzing very large-scale legacy code bases. In the security realm, if you find a security problem, you've got to pick your path. You've got the choice of selling the security problem that you found to the highest bidder, which then risks getting somebody killed and dismembered in some embassy somewhere. Or you need to be prepared to be the bearer of bad news to an organization and tell them, can you please fix this? Usually, nobody is happy about this, because security is a cost center and interferes with the rest of the business. The advantage of doing performance work is that it's got all the technical interestingness of security work, but if you find a problem, you can fix it, and the resulting code will run faster, cheaper, and eat less energy. It's much less of a headache than security work.

Here's a map with a very convoluted path. This slide is here to tell you that the rest of this talk is going to be a little bit all over the place. Meaning, I've gathered a whole bunch of things that I've learned over the years, but I haven't necessarily been able to extract a very clear narrative yet. I'll share what I've learned, and I'll ask for forgiveness that the overall direction of this talk is a much less clear narrative than I would like it to be. Let's talk about the things I've learned. There are four broad categories in which I learned lessons over the last years. One is the importance of history when you do performance work. Then there is a category of things I learned about organizational structure and organizational incentives as they relate to performance work. There are a bunch of technical things I learned, of course. And there are a few mathematical things that I had underappreciated when I started out doing performance work. We'll walk through all of them.

Historical Lessons

We'll start with the first lesson, which is: the programming language you're most likely using was designed for computers that are largely extinct. The computers you're using these days are not the computers that this language was designed for. To some extent, it is misdesigned for today's computers. I'll harp a little bit on Java as a programming language. The interesting thing is, if you look at the timeline of the development of Java, the Java project was initiated in 1991 at Sun and released in 1995, which means it started prior to the first mention of the term "memory wall." The memory wall is the discrepancy between the increase in CPU or logic speed over the years and the speed with which we can access memory in DRAM. 1994 was the first time people observed that the speedup growth rates for memory and for logic were not the same, that this would lead to a divergence, and that this would lead to problems down the line. By the time Java was released, this was not really an issue yet. Now it's a couple of decades later, and the memory wall rules everything when it comes to performance work. If you have to hit DRAM, you're looking at 100 to 200 cycles. You can do a lot of computation in 100 to 200 cycles on a modern superscalar CPU, so hitting memory is no longer a cheap thing.

This interestingly led to a few design decisions that were entirely reasonable when Java was first designed, but that turned out to be suboptimal in the long run. The biggest of these is the fact that it's really difficult in Java to do an array of structs. In C, it's very easy to create an array of structs that have equal stride and are all contiguous in memory, next to each other. Out of the box, Java has no similar construct, and it is, while not impossible, reasonably difficult to get a similar memory layout. If you look at an object array, instead of having all the data structures consecutive in memory, you have an array of pointers to the actual objects. Traversing that array implies one extra memory dereference per element, which is then bound to cost you between 100 and 200 cycles extra, depending on what's in cache and what isn't.
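To make the layout difference concrete, here is a minimal Java sketch (the `Point` class and the data are invented for illustration): summing over a `Point[]` chases one pointer per element, while the "parallel primitive arrays" workaround keeps the same data contiguous in memory.

```java
// Two ways to lay out N (x, y) points in Java.
// Point[] stores references to heap objects; the parallel-array version
// keeps coordinates contiguous, avoiding one dereference per element.
public class PointLayout {
    static class Point {
        double x, y;
        Point(double x, double y) { this.x = x; this.y = y; }
    }

    // Array of objects: each element is a reference; summing chases N pointers.
    static double sumX(Point[] pts) {
        double s = 0;
        for (Point p : pts) s += p.x;  // extra memory dereference per element
        return s;
    }

    // "Struct of arrays": x-coordinates are contiguous, cache- and prefetch-friendly.
    static double sumX(double[] xs) {
        double s = 0;
        for (double x : xs) s += x;    // sequential reads, no pointer chasing
        return s;
    }

    public static void main(String[] args) {
        int n = 4;
        Point[] pts = new Point[n];
        double[] xs = new double[n];
        for (int i = 0; i < n; i++) {
            pts[i] = new Point(i, -i);
            xs[i] = i;
        }
        // Same result either way; the difference is purely memory layout.
        System.out.println(sumX(pts) == sumX(xs)); // true
    }
}
```

Project Valhalla's value types are an ongoing effort to close exactly this gap in the JVM.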

There were a number of assumptions baked into the language when it was designed. One was that traversing a large linked graph structure on the heap is a very reasonable thing to do; garbage collection is nothing but traversing a large graph. Clearly, the calculus for this changes once you have to face something like the memory wall. The other assumption was that dereferencing a pointer does not come with a significant performance hit, meaning more than perhaps one or two cycles. These were entirely correct assumptions in 1991, and they're entirely wrong today. That's something you end up paying for in many surprising parts of infrastructure. Why do I mention this? Because it means that software is usually not designed for the hardware on which it runs: not only is your application not designed for the hardware on which you're running, your programming language wasn't designed for the hardware on which you're running either. Software tends to outlive hardware by many generations.

Which brings me to the next topic: your database and your application are certainly not designed for the computers that exist nowadays, and are likely designed for computers that are also extinct. I'll talk a little bit about spinning disks and NVMe drives, because they're very different animals and people don't appreciate just how different they are. A spinning disk gives you 150 to 180 IOPS, which means you can read from perhaps 150 different places on the disk per second. Seeks are very expensive: you physically have to move a hard disk arm over a spinning platter. Data layout on the disk, where the data is physically located, matters, because seek times depend on the physical distance the drive head has to travel. Latency for seeking and reading is in the multiple-millisecond range, which means you can afford very few seeks and reads if you need to react to a user clicking on something. Most importantly, there is very little internal parallelism, meaning only a few seeks in the I/O queue are actually useful; the drive can usually only read one piece of data at a time. The last thing to keep in mind is that multiple threads contending for the same hard disk will ruin performance, because if multiple threads are trying to read two different data streams, the end result is that the hard disk has to seek back and forth between those two areas on the disk. That's just terrible for throughput.

If you look at a modern SSD or NVMe drive, you've got around 170,000 IOPS, three orders of magnitude more than a spinning disk. It's still 11 times more than Amazon's Elastic Block Storage. It's just an entirely different category. Seeking is really cheap. NVMes have internal row buffers, which means you may actually get multiple random accesses for free. They have near-instant writes, because the writes don't actually hit flash immediately [inaudible 00:11:44] but are buffered by internal DRAM. The latency for a seek and read is under 0.3 milliseconds, but these drives also require a significant amount of parallelism to actually make use of all those IOPS. Because each read has a certain latency, you have to have many requests in flight to hit peak performance, often on the order of 32 or 64 or even more I/O requests at the same time.
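As an illustration of keeping many requests in flight, here is a hedged Java sketch (the file, offsets, and batch size are made up for the example) that submits a batch of random reads through `AsynchronousFileChannel` instead of issuing one blocking read at a time:

```java
// Sketch: submit a batch of positional reads, then collect the results.
// The drive can service the in-flight requests concurrently.
import java.nio.ByteBuffer;
import java.nio.channels.AsynchronousFileChannel;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;
import java.util.ArrayList;
import java.util.List;
import java.util.Random;
import java.util.concurrent.Future;

public class ParallelReads {
    public static long readScattered(Path file, long[] offsets, int blockSize) throws Exception {
        try (AsynchronousFileChannel ch =
                 AsynchronousFileChannel.open(file, StandardOpenOption.READ)) {
            List<Future<Integer>> inFlight = new ArrayList<>();
            for (long off : offsets) {
                ByteBuffer buf = ByteBuffer.allocate(blockSize);
                inFlight.add(ch.read(buf, off));   // submit; does not block
            }
            long total = 0;
            for (Future<Integer> f : inFlight) {
                total += Math.max(0, f.get());     // now wait for completions
            }
            return total;
        }
    }

    public static void main(String[] args) throws Exception {
        // Demo against a small temp file: 64 random 4 KiB reads in flight at once.
        Path tmp = Files.createTempFile("demo", ".bin");
        Files.write(tmp, new byte[1 << 20]);       // 1 MiB of zeroes
        Random rnd = new Random(42);
        long[] offsets = new long[64];
        for (int i = 0; i < offsets.length; i++) {
            offsets[i] = rnd.nextInt((1 << 20) - 4096);
        }
        System.out.println(readScattered(tmp, offsets, 4096) + " bytes read");
        Files.deleteIfExists(tmp);
    }
}
```

A production storage engine would more likely use direct I/O with a deep queue (or io_uring via native code), but the essential point is the same: submission is decoupled from completion, so many requests can be outstanding at once.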

That leads me to the way many storage systems are misdesigned for an era that is no longer a reality. What I've seen repeatedly is storage systems that feature a fixed-size thread pool, sized approximately to the number of cores, to minimize contention of threads for cores, but also to minimize contention on the actual underlying hard disk. What I've also seen is that you combine the fixed-size thread pool with mmap: you map files from disk into memory and rely on the operating system and the page cache to make sure data gets shuffled between disk and memory. You rely on reasonably large readahead, because you want to get a lot of data into RAM quickly, since seeking is so terribly slow on a spinning disk. These things make some sense provided you're on a spinning disk. The reality is that on a modern system, you'll find the strange situation that you hit the system, and it maxes out neither the CPU nor the disk. It just sits there, because you end up exhausting your thread pools while all the threads sit waiting for page faults to be serviced. The issue is that a page fault is inherently blocking, and it takes around 0.3 milliseconds to handle end-to-end, which means the thread resumes after 0.3 milliseconds, which means a single thread can only generate about 3,000 IOPS. But you need 170,000 to hit the max capacity of the drive, which means that to saturate all the IOPS the drive can give you, you would need on the order of 50 to 60 threads hitting page faults constantly, all the time, far more than a cores-sized pool gives you. The upshot is that if you do any blocking I/O, given the internal parallelism of modern NVMe drives, your thread pools are almost certainly sized too small. The end result will be a system that is slow without ever maxing out the CPU or the disk. It will just spend most of its time idle.
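The arithmetic in the paragraph above fits in a few lines. A napkin-math sketch, using the round numbers from the talk (real fault latencies under load will differ, so treat the result as an order of magnitude):

```java
// Napkin math: how many constantly-blocked threads does it take to saturate
// a drive, if each blocking page fault costs faultLatencySeconds end-to-end?
public class ThreadPoolNapkinMath {
    public static int threadsToSaturate(double faultLatencySeconds, int targetIops) {
        double iopsPerThread = 1.0 / faultLatencySeconds;      // ~3,333 at 0.3 ms
        return (int) Math.round(targetIops / iopsPerThread);   // rounded; napkin precision
    }

    public static void main(String[] args) {
        // ~0.3 ms per fault, 170,000 IOPS drive: a cores-sized pool is far too small.
        System.out.println(threadsToSaturate(0.0003, 170_000)); // 51
    }
}
```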

Modern SSDs are real performance beasts, and you need to think carefully about the best way to feed them. What about cloud SSDs? Cloud SSDs are an interesting animal, because cloud-attached storage is a different beast from both spinning disks and physically attached NVMe drives. Spinning disks are high latency, low concurrency. A local NVMe drive is low latency, high concurrency. Network-attached storage is reasonably high latency, because you've got the round trip to the other machine, but it can have essentially almost arbitrary concurrency. It's interesting that very few database systems are optimized to operate in the high-latency, near-limitless-concurrency paradigm. What's interesting about us as a software industry is that we seem to expect that the same code base, the same architecture, and the same data structures that are useful in one scenario should be useful in all three. We're really asking the same software to operate on three vastly different storage infrastructures. That's perhaps not a reasonable thing to ask.

Technical Lessons

Another important technical lesson I've learned: libraries dominate apps when it comes to CPU cycles. What I mean by this is that the common libraries in any large organization are going to dominate the heaviest application in terms of time spent. Imagine you're Google, or Facebook, or Spotify, or anybody else that runs a large fleet with many services: the cost of the biggest service is going to be nowhere close to the aggregate cost of the garbage collector, or your allocator, or your most popular compression library, because these libraries end up in every single service, whereas the services themselves are fragmented all over the place, meaning you'll have dozens or hundreds or thousands of them. In almost every large organization I've seen, once you start measuring what actually drives compute cost, it is almost always a common library that eclipses the most heavyweight application. At Google, while I was still there, a significant fraction, I think 3% or 4%, of the overall fleet was spent on a single loop in FFmpeg. That loop was later moved into hardware, into a specialized accelerator. The point is that if you start profiling across a fleet of services, it's very clear that your Go garbage collector is going to be bigger than any individual application.

That's quite interesting, because it means there's room for something I've long wanted to have. My previous startup would have liked to build it, had we continued existing the way we assumed we would. The dream was to be a global profiling Software as a Service, which means we would have data about precisely which open source software is eating how many cycles globally. Then you could do performance bug bounties on GitHub. Going around and saying, "This is an FFmpeg loop; we estimate that the global cloud cost of this loop, or the global electricity cost of this loop, is $20 million. If you manage to optimize it by a fraction of a percent, you'll earn $50,000." The interesting thing is you could create something where individual developers are happy, because they get paid to optimize code, and the overall world is happy, because things get cheaper and faster. Unfortunately, we're not there. Perhaps somebody else will pick this up and run with it.

Another important technical thing I had underestimated previously: garbage collection is a pretty high tax in many environments. People spend more money on the garbage collector than they initially think. I think the reason is, first of all, that common libraries dominate individual apps, and garbage collection is part of every single app, given a particular runtime. Second, garbage collection is reasonably expensive, because traversing graphs on the heap is bad for data locality, which makes it heavier than many people think. It is very common in any infrastructure to see 10% to 20% of all CPU cycles spent in garbage collection, with some exceptional situations where you see 30%. At that point, you should start optimizing some of the code. Before I started this, I wouldn't have bet that garbage collection is such a large fraction of the overall compute spend globally.

One thing I also found extremely surprising is that, in one of the chats I had with our performance engineers, somebody told me, “Whenever we need to reduce CPU usage fleetwide, what we do is memory profiling.” As a C++ person, I heard that, I was like, that is extremely counterintuitive. Why would you do memory profiling in order to reduce CPU usage? The answer to this is, in any large-scale environment where all your languages are garbage collected, the garbage collector is going to be your heaviest library. If you reduce memory pressure, you’re automatically going to reduce pressure on CPU across the entire fleet. They would focus on memory profiling their core libraries to put less stress on the garbage collector, and that would overall lower their CPU usage and billing. That was a surprising thing that I had not anticipated.

It turns out that pretty much everybody becomes an expert at tuning garbage collectors, if they are tunable. It's also interesting that a lot of high-performance Java folks become experts at avoiding allocations altogether. I just saw a paper from the Cassandra folks where they praised a new data structure partly because it could be implemented with only a few very large allocations that get reused between runs. You also see Go engineers become surprisingly adept at reading the output of the compiler's escape analysis, to determine whether certain values are allocated on the stack or on the heap. What's interesting there is that the big pitch for garbage collection was: don't worry about memory management, the garbage collector will do it for you. It's interesting to see that in the end, if you care about performance, you do need to worry about memory management. You may still use a garbage collector, or you may use a language that provides safety without a garbage collector. Either way, you will not escape having to reason about memory management.
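As a small illustration of that allocation-avoiding style, here is a hypothetical Java sketch (the class and workload are invented for the example) that reuses one large, preallocated buffer across calls instead of allocating per call, so the steady-state path gives the garbage collector nothing to do:

```java
// Reuse one large allocation across runs; the hot path allocates nothing.
import java.util.Arrays;

public class ReusableScratch {
    private final long[] scratch;           // one large allocation, reused forever

    public ReusableScratch(int capacity) {
        this.scratch = new long[capacity];
    }

    // Allocation-free in steady state: writes into the reused buffer.
    public long sumSquares(int n) {
        if (n > scratch.length) throw new IllegalArgumentException("n too large");
        for (int i = 0; i < n; i++) scratch[i] = (long) i * i;
        long total = 0;
        for (int i = 0; i < n; i++) total += scratch[i];
        Arrays.fill(scratch, 0, n, 0L);     // clear the used region for the next caller
        return total;
    }

    public static void main(String[] args) {
        ReusableScratch s = new ReusableScratch(1024);
        System.out.println(s.sumSquares(4)); // 0 + 1 + 4 + 9 = 14
    }
}
```

The same idea underlies object pools, arena allocators, and the Cassandra-style "few large reusable allocations" design mentioned above.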

Organizational Lessons

One of the big lessons that I took away is that your org chart and the way you organize your IT matter quite a bit for being able to translate code changes into savings. The reason I say this is that all large organizations have both a vertical and a horizontal structure, but pretty much every one of them emphasizes one or the other. Google, for example, is a very vertical organization. You've got the infrastructure team. You've got a big monorepo. You've got very prescriptive ways of doing things and a very prescriptive tech stack, where Google essentially tells the engineers: if you need a key-value store, use Bigtable. For every such decision, there's a clearly prescribed solution. All of it lives in a big monorepo. These are big services shared across the entire company, which are the vertical stripes in this diagram. Then every project, which is a horizontal stripe, picks and chooses from these different vertical stripes and assembles something from them.

Then there are other organizations that are much more horizontally oriented, where you have separate teams, two-pizza teams or whatever, that may have a lot of freedom in what they choose. They may choose their own database. They may choose their own build environment. They may choose their own repository, and so forth. While this is excellent for rapid iteration on a product and for keeping the teams unencumbered by what some people would perceive as red tape, it's not necessarily ideal for realizing savings across the board. What I've observed is that vertical organizations are much better at identifying performance hogs, which are usually a common library, then fixing that library and reaping the benefits across the entire fleet. If you do not have an easy way, or at least a centralized repository of your common artifacts, it's much harder to do that. Because if you identify, for example, that you really should swap one allocator for another, or that you have a better compression library, you now have to walk around the organization trying to get a large number of teams to perform a change in their repositories. That is much more strenuous, which means the amount of work you have to do to realize the savings is much higher.

Another really surprising thing that I learned was that companies cannot buy something that has net negative cost; companies are really reluctant to buy savings. When we started on this entire journey, we initially thought that we were going to do savings-share work, meaning we would offer people: we'll look at your tech stack and work with you to improve it, for free, but we would like a cut of the savings we realize over the next couple of months. It turns out that while this looks really sensible from a technical and economic perspective, it is almost impossible to pull off in the real world, largely because neither the accounting nor the legal department is set up to ever do anything like this. Accounting doesn't really know how to budget for something that has a guaranteed net negative cost but, at the same time, an unknown-size outflow of money at some point. The legal department cannot put a maximum dollar value on the contract and is worried about arguments over what the actual savings are going to be. The experience, after a couple of months of trying to do this, was that we succeeded only with tiny players. A friend of mine who has done professional services for 20-plus years told me, "You may be able to do this if you're Bain and you've played golf with the CEO for 20 years, but as a startup, this is really not something that big companies will sign up for." For me, as a technical person, it was somewhat counterintuitive that large enterprises cannot just buy savings, because they're very happy to buy products in the hope of realizing savings. A contract that guarantees them savings in return for a cut of the savings is something too unusual to actually be purchased. That was definitely a big lesson for me to learn, and a surprising one.

Another organizational thing that I learned, and found surprising and interesting, is the tragedy of the commons around profile-guided optimization and compilation time. What's happening here is that, essentially, nobody likes long compilation times. The Go developers, for example, are very keen on keeping Go compilation times really low. The people who build your upstream Linux packages have limited computational resources, and they don't really want to spend a lot of time compiling your code either. This at one point led to a situation where the Debian upstream Python was compiled without profile-guided optimization enabled, which meant everybody running a Debian-derived Linux distribution, Ubuntu and so forth, was paying 10% to 15% extra for every line of Python executed globally. Largely because the people building the upstream package didn't want to incur the extra time it takes to do a PGO-optimized Python build, which takes about an hour. What we see here is a tragedy of the commons. For many common libraries, if the library runs on 1,000 cores for a year, increasing its performance by 1% is worth 10 core-years, meaning you could afford to spend a lot of compile time on that library if it ekes out a 1% performance improvement.

There's an argument to be made that it would make global economic sense to pick certain really heavyweight libraries and apply the most expensive compile-time optimizations you can possibly dream up to them. It doesn't matter if a library ends up compiling for two weeks, because the global impact of the 1% saving will be much higher than the cost of two weeks of hardcore computation to compile it. Instead, we end up with this tragedy of the commons, where nobody has an incentive to speed up the global workload.

This is made worse by the fact that on x86, everybody compiles for the wrong microarchitecture, because the upstream packages are all compiled for the lowest-common-denominator architecture. Almost certainly, your cloud instance has a newer microarchitecture, and you can often get a measurable speedup by rebuilding a piece of software precisely for your uArch. The issue is that Linux distributions don't necessarily support microarchitecture-specific packages, and cloud instances don't cleanly map to microarchitectures either. What we have here is a global loss of CPU cycles and a global waste of electricity, caused by the fact that the code is always compiled for slightly the wrong CPU. It's interesting if you think about all the new Arm server chips: perhaps one of their advantages is that, in general, your code will be compiled for the right microarchitecture.

Mathematical Lessons

Mathematically speaking, the biggest thing I learned is that, even for me as a trained mathematician, benchmarking performance is a statistical nightmare. Every organization that cares about performance should have benchmarks as part of its CI/CD pipeline. These benchmarks should, in theory, be able to highlight performance regressions introduced by a pull request. In theory, organizations should always be aware of which changes lead to performance deteriorations over time. In theory, answering the question "does this change make my code faster?" should be easy, because it's classical statistical hypothesis testing. In practice, it is extremely rare to see an organization that runs statistically sound benchmarks in CI/CD, because there are so many footguns and traps involved.

The very first thing that I underappreciated initially is that variance in performance is your enemy. If you're trying to decide "does this change make my code faster?" and you have extremely high variance in performance, you need many more measurements to actually make that decision. The upshot is that if you tolerate high variance in your performance to begin with, any benchmarking run to determine whether you've improved overall performance is going to take longer, because you need more repetitions before you have enough data. Another problem you run into is the non-normality of practically every distribution you encounter in performance work. I sometimes look at the distributions I see in practice and want to yell at a statistician, "does this look normal to you?" The main issue is that when you deal with really unusual distributions that are difficult to model parametrically, and performance work almost always deals in such distributions, you very quickly reach the point where the statisticians tell you: if that is really the case, we have to say goodbye to parametric testing and resort to nonparametrics, which are, as a friend of mine who is a statistics professor calls them, "methods of last resort." Unfortunately, in performance, the methods of last resort are usually the only methods you've got.
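To show what a "method of last resort" can look like in practice, here is a hedged sketch of a nonparametric percentile-bootstrap confidence interval for the difference of median latencies between two benchmark runs (the sample data is invented; real runs would have far more samples):

```java
// Nonparametric bootstrap CI for median(b) - median(a): no normality assumed.
import java.util.Arrays;
import java.util.Random;

public class BootstrapMedianDiff {
    static double median(double[] xs) {
        double[] s = xs.clone();
        Arrays.sort(s);
        int n = s.length;
        return n % 2 == 1 ? s[n / 2] : (s[n / 2 - 1] + s[n / 2]) / 2.0;
    }

    static double[] resample(double[] xs, Random rnd) {
        double[] out = new double[xs.length];
        for (int i = 0; i < xs.length; i++) out[i] = xs[rnd.nextInt(xs.length)];
        return out;
    }

    // Returns {lo, hi}: a 95% percentile-bootstrap CI for median(b) - median(a).
    static double[] ci(double[] a, double[] b, int reps, long seed) {
        Random rnd = new Random(seed);
        double[] diffs = new double[reps];
        for (int r = 0; r < reps; r++) {
            diffs[r] = median(resample(b, rnd)) - median(resample(a, rnd));
        }
        Arrays.sort(diffs);
        return new double[]{diffs[(int) (reps * 0.025)], diffs[(int) (reps * 0.975)]};
    }

    public static void main(String[] args) {
        double[] before = {10.2, 11.0, 10.7, 10.9, 10.4, 10.8, 11.3, 10.6}; // ms
        double[] after  = { 9.1,  9.8,  9.4,  9.6,  9.0,  9.7, 10.1,  9.3}; // ms
        double[] interval = ci(before, after, 10_000, 42);
        // If the whole interval is below zero, the change is plausibly a real speedup.
        System.out.printf("95%% CI for median diff: [%.2f, %.2f]%n", interval[0], interval[1]);
    }
}
```

Note that even this assumes the samples are independent and identically distributed, which, as described next, real CPUs conspire against.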

Another problem you run into is that the CPU internals you face when you run your benchmarks mean that your tests are not identically distributed over time. Modern CPUs have sticky state: your branch predictor will be trained by a particular code path taken, which means your benchmarks will vary in performance based on whether they run for the third time or the fifth time. That's very difficult to get rid of. One of the solutions is random interleaving of benchmark runs, where you do one benchmark run, then run a different one, and so forth. You still have to contend with things like your CPU clocking up and down, and sometimes with architectural quirks. There was an entire generation of CPUs where, if any core switched to using vector instructions, all the other cores would be clocked down. You have all these noisy things that destroy your independence across time. Then you've got all the mess created by ASLR, caches, noisy neighbors on cloud instances, and so forth. Depending on the address space layout of your code, you may get almost 10% noise in your performance measurements just from an unlucky layout. You can have noisy neighbors on cloud instances maxing out the memory bandwidth of the machine and stalling your code. There can be all sorts of trouble you did not anticipate.
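The random-interleaving idea mentioned above is simple to sketch. A minimal, hypothetical example that builds a shuffled trial schedule for two benchmarks, rather than running all trials of A followed by all trials of B:

```java
// Build a shuffled schedule of benchmark trials so that sticky CPU state
// (branch predictors, clock frequency) is spread across A and B rather than
// systematically biasing whichever benchmark happens to run second.
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.Random;

public class InterleavedTrials {
    static List<String> schedule(List<String> benchmarks, int repsEach, long seed) {
        List<String> order = new ArrayList<>();
        for (String b : benchmarks)
            for (int i = 0; i < repsEach; i++) order.add(b);
        Collections.shuffle(order, new Random(seed)); // deterministic for reproducibility
        return order;
    }

    public static void main(String[] args) {
        // A shuffled mix of A and B trials, instead of the blocked AAA BBB order.
        System.out.println(schedule(List.of("A", "B"), 3, 7L));
    }
}
```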

If you start controlling for all of these, meaning you run on a single-tenant, bare-metal machine, you disable frequency boosting and so forth, and you try to really control the variance of your measurements, you end up in a situation where your benchmarking setup becomes more and more different from your production setup. Then the question becomes: if your benchmarking measurement is really not representative of production, what are you benchmarking for? The end result is that almost nobody has statistically reliable benchmarks that show improvement or regression in CI/CD, because in many situations, running enough experiments on each commit to establish that a change actually makes something faster, with a meaningful confidence interval, is prohibitive; you need too many samples. This doesn't seem to bother anyone. That said, a few people have done fantastic work on this. MongoDB has written a great article about their struggles with change point detection. ClickHouse has written a great article about how they control for all the side effects and noise on the machine. Their trick is relatively clever: they run the A and the B workload on the same machine at the same time, arguing that at least the noise is going to be the same for both runs. If you really want to get into nonparametric statistics for the sake of performance work, there's a fantastic blog by Andrey Akinshin, who works at JetBrains. I can very much recommend it. It's heavy, but it's great.

Concrete Advice – What Are the Takeaways from All These Lessons?

After all of this, what's the concrete advice from these anecdotes? What are the takeaways from the lessons I learned? On the technical side, it is crucially important as a performance engineer to know your napkin math. Almost every performance analysis and every performance murder mystery begins by identifying a discrepancy between what your napkin math and your intuition tell you should happen, and what happens in the real system. "This should not be slow" is the start of most adventures. A surprising number of developers have relatively poor intuition for what a computer can do, and knowing your napkin math will help you figure out approximately how long something should take. Another important thing on the technical side: as a performance engineer, you will have to accept that tooling is nascent and disjoint. I ended up starting a company because I needed a simple fleet-wide CPU profiler. You are going to be fighting with multiple, poorly supported command line tools to get the data you want. The other big takeaway is: always measure. Performance is usually lost in places that nobody expects. Performance issues are almost always murder mysteries; it's very rarely the first suspect that ends up being the perpetrator. Measuring is very scientific, very empirical in a way, and I quite like that about the work. Another thing: there's a lot of low-hanging fruit. Most environments have 20% to 30% of relatively easy wins on the table. In a large enough infrastructure, that is real money. That's real, demonstrable technical impact.
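As an example of the kind of napkin math meant here, a sketch with round, assumed numbers (~100 ns per dependent DRAM miss, ~10 GB/s effective scan bandwidth, 16 bytes per element; all figures are ballpark assumptions, not measurements):

```java
// Napkin math: traversing 10 million linked nodes (one dependent cache miss
// each) versus scanning the same data laid out contiguously.
public class NapkinMath {
    // One dependent DRAM miss per element; latencies do not overlap.
    public static double pointerChaseMillis(long elements, double dramLatencyNs) {
        return elements * dramLatencyNs / 1e6;
    }

    // Sequential scan limited by memory bandwidth.
    public static double linearScanMillis(long bytes, double gbPerSecond) {
        return bytes / (gbPerSecond * 1e9) * 1e3;
    }

    public static void main(String[] args) {
        long elements = 10_000_000;
        System.out.printf("pointer chase: ~%.0f ms%n",
                pointerChaseMillis(elements, 100));          // ~1000 ms
        System.out.printf("linear scan:   ~%.0f ms%n",
                linearScanMillis(elements * 16L, 10));       // ~16 ms
        // If the real system is far off these estimates, the murder mystery begins.
    }
}
```

Two minutes of this kind of arithmetic tells you whether "it takes a second" is expected or a symptom.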

On the organizational side, if you were to task me with introducing a program to improve overall cost efficiency, one of the most important things to do is to establish a north-star metric. For a provider of digital goods, that means working towards something that approximates the cost per unit served. Say you’re McDonald’s: McDonald’s has a pretty clear idea of what the input costs for a hotdog are and what it sells for, so they can drive down the input costs. Similarly for software: if you’re a music streaming service, you would want to know the cost of streaming one song; if you’re a movie streaming service, the cost of streaming a movie, and so forth. Identifying a north-star metric that captures unit cost is really the most effective step that you can take. Once you have that, things will fall into place somewhat magically, I think. What I’ve observed is that dedicated teams can have pretty good results at a certain organizational size. Before that, just doing a hackathon week can be pretty effective. During a hack week, you focus on identifying and fixing low-hanging fruit. The results of such a hack week can then be used as leverage, as an argument for why a dedicated team is a sensible thing to do. If you’ve established a north-star metric, the return on investment of such a team or hack week is going to be measurable, very visible, and very easily communicated to business decision makers. Which lastly leads me to the fact that very few areas in software engineering have such clear success metrics as performance work. I like the clarity, the game-like nature of it, because you get to really measure the impact you have. I also predict that the organizational importance of performance and efficiency is only going to grow over time. We’re facing a period where money is more expensive: it’s just going to be harder to get investment capital, and so forth. Margins do matter these days.
I would not be surprised if, over the next years, efficiency of compute becomes a board-level topic on par with security and other areas.
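As a toy illustration of the north-star unit-cost idea, here is a sketch with entirely hypothetical numbers for a music streaming service; the figures are invented for the example.

```python
def cost_per_unit(monthly_infra_cost_usd, units_served):
    """North-star metric: infrastructure cost per unit of digital goods served."""
    return monthly_infra_cost_usd / units_served

# Hypothetical numbers: $1.2M/month of compute, 2 billion songs streamed.
before = cost_per_unit(1_200_000, 2_000_000_000)
# The same traffic after a 25% efficiency win on the compute bill.
after = cost_per_unit(900_000, 2_000_000_000)
print(f"${before * 1000:.3f} vs ${after * 1000:.3f} per 1000 songs")
```

The value of the metric is that an efficiency win expressed this way ("cost per thousand songs dropped 25%") is immediately legible to business decision makers, unlike raw CPU utilization numbers.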

On the mathematical side, my advice is, unfortunately, that what are usually methods of last resort are going to be your first resort. Reading up on nonparametric statistics is a good idea. Because you are in a nonparametric setting, you will need to deal with the fact that your tests are going to have relatively low statistical power, which means you’ll need a certain number of data points before you can draw any firm conclusions. That means there is a compute cost for scientific certainty: you need more benchmarking runs, more data points, and you need to weigh carefully when that is warranted and when it isn’t.
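The low-power point can be made concrete with a simple paired sign test, one of the weakest but most assumption-free tests: count how often the candidate beats the baseline in paired runs. The sketch below computes the test’s power exactly from the binomial distribution; the 60% win rate is an invented example of a small, real improvement.

```python
from math import comb

def sign_test_power(n, p_faster, alpha=0.05):
    """Power of a one-sided paired sign test with n paired benchmark runs.

    p_faster is the probability that the candidate wins a single paired
    run under the alternative. Assumes n is large enough that a
    rejection threshold at level alpha exists.
    """
    # Smallest win count k whose null tail probability (p = 0.5) is <= alpha.
    k = next(k for k in range(n + 1)
             if sum(comb(n, i) for i in range(k, n + 1)) / 2**n <= alpha)
    # Power: probability of reaching that threshold under the alternative.
    return sum(comb(n, i) * p_faster**i * (1 - p_faster)**(n - i)
               for i in range(k, n + 1))

# A small real improvement (candidate wins 60% of paired runs): with only
# a handful of runs the test almost never detects it.
for n in (10, 50, 200):
    print(n, round(sign_test_power(n, 0.60), 2))
```

This is exactly the "cost of scientific certainty": detecting a modest improvement reliably can require hundreds of benchmark runs per commit, which is why statistically sound CI/CD benchmarking is so rare.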

Historically, the advice would be: always keep yourself up to date on drastic changes in computer geometry. A thousandfold increase in IOPS; a change in cost for a previously expensive operation (integer multiplication used to be very expensive, but nowadays it’s so cheap you can easily use it in hash functions); changes in the state of the art for data compression (compare zstd versus zlib). These are the changes that have drastic, multi-year impact on downstream projects. You can still get reasonable performance improvements by switching your compressor to something else, or by making use of operations and structures that are nowadays cheap, and so forth. Keeping yourself up to date on what has recently changed in computing is a very useful skill, because odds are the software you’re looking at was written for a machine where a lot of things were true that are no longer true. Being aware of this will keep turning up interesting optimization opportunities. The other thing to keep in mind is that code and configurations outlive hardware, often by decades. Code is strangely bimodal: it either gets replaced relatively quickly, or it lives almost forever. Ninety percent of the code that I write is going to be gone in 5 years, and 10% will still be there in 20 years. You should assume that pretty much any tuning parameter in any code base that hasn’t been updated in 3 years is likely wrong for the current generation of hardware.
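Measuring the compression trade-off for your own data is a one-screen experiment. zstd is not in the Python standard library, so the sketch below stands in with zlib and lzma, which are; the payload is invented, repetitive log-like data, and the numbers you get will differ per machine and per dataset, which is the point of measuring.

```python
import lzma
import time
import zlib

# Hypothetical payload: repetitive JSON-ish log lines, roughly 1 MB.
payload = b'{"user": 12345, "event": "play", "song_id": 987654}\n' * 20_000

# Compare ratio and wall-clock cost of two stdlib compressors.
for name, compress in [("zlib (level 6)", lambda d: zlib.compress(d, 6)),
                       ("lzma (preset 1)", lambda d: lzma.compress(d, preset=1))]:
    start = time.perf_counter()
    out = compress(payload)
    elapsed = time.perf_counter() - start
    ratio = len(payload) / len(out)
    print(f"{name}: {ratio:.1f}x in {elapsed * 1000:.1f} ms")
```

Swapping in a modern codec like zstd (available via third-party bindings) and rerunning the same harness on real production payloads is how you find out whether the multi-year shift in the state of the art applies to your workload.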

Technical Outlook

Where are we going? Where should we be going? If I could dream, what tools would I want? I’ll dream a bit and wish things that I don’t yet have. The one thing that I’ve observed is that diagnosing performance issues requires integrating a lot of different data sources at the moment. You use CPU profilers and memory profilers. You use distributed tracing. You use data from the operating system scheduler to know about threads. Getting all that data into one place and synchronizing it on a common timeline, and then visualizing it, all of that is still pretty terrible. It’s disjoint. It’s not performant. It’s generally janky. There’s a lot of Bash scripting involved.

The tools I wish I had. There’s different tools for different problems. For global CO2 reduction, I really wish I had the Global Profiling Software as a Service database with a statistically significant sample of all global workloads, because then we would do open source performance bug bounties, and we could all leave this room with a negative CO2 balance for the rest of our lives. That would have more impact than many individual decisions we can take. For cost accounting, I would really like to have something that links back the usage of cloud resources like CPU, I/O, network traffic, back to lines of code, and have that integrated with a metric about units served so we can calculate the cost breakdown. Literally, for serving this song to a user, these are the lines of code that were involved and these are the areas that caused the cost, and this is the cost per song. For latency analysis, I would really like to have a tool that combines CPU profiles and distributed tracing and scheduler events, all tied together into one nice UI, and deployable without friction. In the sense that you would like to have a multi-machine distributed trace for a request that you send. Then, for each of the spans within this request, you would like to know what code is executed, and which parts of the time is the CPU actually on core, and which parts of the time is it off core across multiple machines. Have all of that visualized in one big timeline so we can literally tell you, out of these 50 milliseconds of latency, this is precisely where we’re spending the time on which machine, doing what.

Lastly, I would really like to have Kubernetes cluster-wide causal profiling. There’s some work on causal profiling; there’s a tool called Coz, which is really fascinating. These things are still very much focused on individual services. They’re also not quite causal enough yet, in the sense that they can’t tell me if my latency is due to hitting too many page faults. What I would really like is something that you install on an entire Kubernetes cluster, or whatever other cluster software you use. Then you can ask a question like: this is a slow request sent to the cluster, why is it slow? Why is it expensive? Then have some causal model of: this is expensive because machine X over here took too much time servicing this RPC call, because it took too much time servicing this page fault, for example. I’m not sure how to get there. We’ve had pretty dazzling advances in large language models in recent months that can now tell me something non-trivial about my code. Perhaps we will get to something that can tell me something non-trivial about the performance of my software. One big hurdle with all performance tooling that I’ve observed is deployment friction. Most production environments need some sort of frictionless deployment. Ideally, you’re deploying on the underlying node and there’s no instrumentation of the software necessary. The moment that any team outside of your Ops team has to lift a finger, your tool becomes drastically less useful.

What’s Next?

What is next in regards to performance work? I’ve been wandering around the landscape for a bit. I’ve learned a bunch of things. I’m excited about what comes next, but I have no idea what that will be.




Podcast: Building and Nurturing Great Teams

MMS Founder
MMS Nick van Wiggeren

Article originally posted on InfoQ. Visit InfoQ

Transcript

Shane Hastie: Before we get into today’s podcast, I wanted to share that InfoQ’s International Software Development Conference, QCon, will be back in San Francisco from October 2 to 6. QCon will share real world technical talks from innovative senior software development practitioners on applying emerging patterns and practices to address current challenges. Learn more at qconsf.com. We hope to see you there.

Good day, folks. This is Shane Hastie from the InfoQ Engineering Culture Podcast. Today I’m sitting down with Nick van Wiggeren. Nick is the vice president of Engineering at PlanetScale. And that’s about all I know about Nick at this point, so I’m going to go straight across to Nick. Welcome, thanks for taking the time to talk to us today.

Introductions [00:51]

Nick van Wiggeren: Thank you so much for having me. My name is Nick Van Wiggeren. I reside in Seattle, Washington, and I am indeed the vice president of Engineering at PlanetScale. So you’re one for one so far. But a little bit about myself. I’ve been leading engineering teams now for the better part of a decade, focusing a lot on infrastructure and cloud infrastructure as a service. I’ve built clouds at companies like DigitalOcean. I’ve built storage products on top of those clouds, and now I’m hopefully completing a bit of an arc at building up the stack all the way up to some more customer facing stuff with databases. So my world has been infrastructure and my world has mostly been building teams to build infrastructure and make sure they’re rock solid.

Shane Hastie: So that building teams that I really like to explore when we were put in touch, that was the thing that came through about your experience and background that I wanted to delve into. So what does it take to put together a great team?

What does it take to put together a great team? [01:45]

Nick van Wiggeren: That’s a fantastic question. I think that anyone who has a book to sell you or a course to sell you or a guidebook that you can follow to assemble a team by checking a bunch of boxes, they don’t know what they’re talking about, because it’s a million different small things that combine into a cohesive group of people that work well together, that understand each other, that feel comfortable with each other, and, I think most importantly, know what they’re here to do. People are, I think, a bit shy of making sports team analogies these days, but I think there’s a lot of similarity between the two. What you need is a group of complementary people, a clear mission, and a clear leader to give them that mission and help them along it, and then that leader needs to be able to step back and let that team shine.

Shane Hastie: Step back and let the team shine. We’re talking here about autonomy. In my experience, there’s often two challenges here. One is the team members who are uncomfortable with having that autonomy and often the leader who’s unable to let go. And the two go very often hand in hand. So if I’m trying to create this empowered autonomous environment, where do I start?

Empowerment needs an environment where people can be supported [02:54]

Nick van Wiggeren: That’s a fantastic question. And you mentioned a key word in this, which is empowerment. I had a boss a few years ago who made a great analogy. He had just had a kid, a newborn baby, maybe a month old, and he said he could empower that newborn baby to start walking and hold its head up and all of that, but what good would that do? The baby would still be sitting there, unable to do any of those things. True empowerment actually means doing the work and building out the structure so that somebody can accomplish what they need to, and I think that’s the job of the leader. I think a lot of people assume delegation, empowerment, autonomy is foisting a problem onto someone else and then letting them solve it. And yes, sure, of course, you do have to give people problems and challenges, but true empowerment actually means running in front, building the situation where people can make decisions on their own, building the framework for people to understand how they’re going to be viewed as successful and what their goals are.

And so I think a lot of people assume they just get to take a load off, or that it’s just putting people in the right spot and letting them shine. But it’s a lot more than that. It’s preparing the culture in an organization to give people that time to shine. It’s preparing the whole company to understand where decisions are made, because some of the most confusing moments I’ve been in are when people think they’re empowered and then they run up against the brick wall of their manager or their product manager or their CEO actually saying, “Nope, this is my decision and not yours. Thanks for the input. I’m going to go make this”. And so that clarity of expectation and that clarity of execution is what you need to feed into creating autonomous people and autonomous teams.

Shane Hastie: And what does that look like?

Nick van Wiggeren: As a leader, I think a lot of it looks like being explicit. When you have a decision, do you say, “Am I making this? Am I going to figure out who the right person to make this is? Am I clearing the space above me so that below me people can make decisions and move forward? Am I agreeing with my peers that somebody is an expert and is capable of making this decision? Or am I just going to make this decision and be explicit that it is mine and mine alone?” I think some people get really shy about just saying, “Hey, this is mine. This one’s me.”

I’ve spent a lot of wasted time doing roadmap planning, a lot of wasted time talking about decisions, when in the end there was one person who was going to make the whole decision anyway and they were just kind of trying to get everyone to go along with them. So I think it’s about being explicit and being straightforward, and also being explicit in saying, “No, this is not my decision. This is your decision”.

Shane Hastie: As a team member, when I’m uncomfortable with that decision, how do I put my hand up?

Nick van Wiggeren: This is where a lot of that safety and security comes from. So what we want to look for in a team and what I try to create is the group of people who are all cheering for each other’s success. So one person isn’t saying, “Oh, Joe always gets the fun projects” or, “Maria, she’s always the one that gets to make this decision.” You want that person to be able to be totally truthful and honest with you as a leader and with the rest of their team and saying, “Hey, I’m struggling with this. I’m not sure.” Or even being very explicit and saying, “I can’t make this decision alone. I’m missing data. I think it needs to be you.” And in that case, that shouldn’t be viewed as failure. That should be viewed as open, honest, and transparent communication, but the team has to be ready for that. They have to be ready to really work through what that means for them and not just say, “Not my problem.”

Shane Hastie: So again, it’s coming to this team culture. How do we build that team culture?

Building a great team culture starts with hiring and leadership [06:20]

Nick van Wiggeren: It starts at hiring and it starts at leadership. So you’ve got to hire people that fit in with the goals of the company. And I’m not talking about any kind of technology fit, or that they’ve got to have this many years of experience, or they’ve got to be residing in this country. I’m talking about building a team of people that are prepared to bring their full selves to work and are prepared to kind of dig in and do what that means, and a leadership group that is willing to give them that kind of strategic space to get to know each other and to make decisions.

And again, I think what we so often find is people talk the talk here. They talk about empowerment, autonomy, delegation, bringing your full self to work, and a lot of these kinds of buzzwords that get passed around, but actually doing it requires leaders to be probably the most uncomfortable people in the room. They have to cede control. They have to make room for other people to make the decisions they would make, to get the credit they would get, to lead from behind instead of leading from in front, and to own the mistakes but celebrate the successes of others as well.

Shane Hastie: If only it was easy.

The value of making your culture intent visible [07:20]

Nick van Wiggeren: Hey, I mean, I agree with you. And I think when you find that groove and when you find that you can build that culture, it gets easier and easier every day to grow. It gets easier and easier every day to add to that, because that becomes the norm. And that’s what I think you see in really good companies that have scaled their culture and kept their culture as they’ve doubled and doubled and doubled: they’ve been able to start with that nugget. Many people have talked about the Netflix Culture Deck as a great example of this. Whether you like that culture or not, they were true, honest, and open about it, and they were able to scale it far past the point where most engineering organizations even have a pulse on their culture.

Shane Hastie: So making your cultural intent visible.

Nick van Wiggeren: Yep. We talk a lot here at PlanetScale about our values. We don’t actually talk that much about our actual values. We don’t brag about them. There’s no kind of public page about them. But we talk about how our values should turn an equal number of people, or at least some people, off of the company. Just as many people should disagree with them as agree with them. They’re not values if everyone says, “Yeah, those sound great”. If your value is, “Be good. Be great. Do good work. Come to work prepared to be a good teammate”, no one disagrees with those values. They have to be opinionated and they have to be something that people opt into so that they actually do define you. Otherwise, again, it’s a buzzword that people are going to throw around and it doesn’t have a lot of value.

Shane Hastie: There’s a lot of change happening around us in engineering today, in the world today. How do we keep that core of stability in an environment of uncertainty at best?

Nick van Wiggeren: Yeah, I’ve been a remote worker now since 2017, and I’ve been leading remote teams even before that; I’ve always kind of worked at remote-first companies. But when you combine the explosion of that with the kind of economic uncertainty that tech has faced, the ability to be a strong and present and calming leader is more important than ever. You’ve got to be able to reach into people’s homes now. You’ve got to be able to reach people in a way that you just didn’t need to when you were in an office and you could swing by and check on someone.

And I think you have to be really cognizant, not just of your words, but how your words are perceived by others, making sure you’re really checking in and building the structure that was maybe taken for granted beforehand. I know I spend a lot of my time really now trying to get into the psyche of people a little bit more, really trying to make sure they feel connected to the company because ultimately as things change quickly, as the days of easy money are slightly more gone, as AI comes in and starts to make people wonder what the future of software development looks like, what they need to feel is they need to feel trust in their leadership and they need to feel like, again, they feel safe and have someone to talk to.

Shane Hastie: What does the future of software development look like?

Nick van Wiggeren: I wish I knew. If I knew, I think my title would be founder and it would be at a domain that ended in .ai. I’m actually more on the side of: everything’s going to change, but everything’s going to stay the same. We’ve had people writing code now for decades and decades and decades. The internet got added in. We added IDEs, we added relational databases, we added JavaScript. You name it, right? And yet here we are, still writing code in Vim. So I don’t know if it’s going to change that much; maybe people will be more productive, but I know it is going to change.

Shane Hastie: In that changing environment, how do leaders cope? How do leaders look after themselves?

Leaders need to take care of themselves too [10:39]

Nick van Wiggeren: It’s lonely. And I’ll stand up here and I’ll say it’s all about emotional regulation, emotional balance, and a little bit of self-discovery. I’ll say this as someone who’s not always very good at that, who sometimes gets a little bit too caught up and who sometimes gets a little bit too focused on work and doesn’t take time for himself. But I think ultimately you’ve got to model the same things that a good leader tells other people. You’ve got to take time off and rest when you need it. You’ve got to understand why you show up to work every day, what you want your team to be like and how you want your team to behave. And then you have to model that exact same behavior. No leader wants to stand up in front of his or her team and say, “You know what, folks? I need a week off. I’m having a hard time”. But that’s what it takes, especially in the remote world, where you can be out at the beach, you can be out at dinner, and Slack on your phone is still telling you about something going on.

The ability to wall yourself off from work and the ability to separate your identity is more difficult than ever, especially for leaders who are feeling responsible, but more important than it’s ever been.

Shane Hastie: And building on that, how do we help our teams thrive again in this environment?

Helping people thrive under uncertainty and pressure [11:43]

Nick van Wiggeren: I think it’s a lot of little things like we’ve been talking about, but I think one of the big things that I’ve been focusing on is just really, really, really keeping an eye out for the kind of events that get people stressed out and the kind of events that really bug people. So as an infrastructure company, we don’t have the privilege of not having a really, really, really focused on-call rotation. We host things where a minute of downtime may cause hundreds of thousands of dollars of damage for customers; a minute of downtime is something people notice. When they log off, their customers don’t, and they’re trusting PlanetScale to keep that running. And so I spend a lot of my time looking at metrics and data to make sure people aren’t getting paged in the middle of the night too much, to make sure people are taking the appropriate amount of vacation, to make sure that people aren’t letting themselves get to a spot where they need more than a week of vacation.

And so I think it’s a lot of things like that, really working backwards from: what does a healthy team look like? What does a healthy work-life balance look like? And sometimes forcing or nearly forcing people, saying, “I’m taking you off the rotation. I’m taking the pager for a night. Go get some rest”, all the way over to “You’re taking a week off”, all the way over to “I’m adding a second person to this project. I understand it’s not because you’re bad, but because we need that second person to bounce ideas off of, or to kind of help get to the other side”.

Shane Hastie: So Nick, when we were preparing the InfoQ Culture Trends report for this year, we identified a few trends that we are seeing around that people stuff. We spoke about humanistic workplaces, we spoke about systemic and leadership coaching as two things we saw as coming in the innovator space. How would you see those?

The value of leadership coaching [13:17]

Nick van Wiggeren: Awesome. I think leadership coaching is really just completely undervalued by tech at every level beneath the CEO. You hear a lot about people with CEO coaches and things like that. I was extremely fortunate very early in my management career: the first company that made me a people manager, which is really, I think, a huge journey both career-wise and personally, to be responsible for other people at work, invested nearly immediately in leadership coaching and in some of those more formal methods of learning how to do that.

And it both propelled my career in a massive way, kind of giving me the tools I need to understand how my words impact other people, the tools I need to understand what people need from a leader. As well as for me personally, the kind of space to navigate a lot of the components of all of a sudden being thrust into running a team and being responsible for a culture and being responsible for all of what that entails. Everything from people misconstruing your words and having a hard time all the way over to, again, that ability to kind of admit when you’ve made a mistake and move forward.

So I think management as a discipline is something that a lot of tech companies don’t invest in, right? You put the best engineer as a people manager. You put the person who steps up as the people manager. I joke one of my most important qualities when I first became a manager was that I had good handwriting so I would take notes. And so I was always the most organized. Is that really a qualifying piece of being a good people manager? No, but I do have very good handwriting and I can write really well on a whiteboard. But all of that training and all of that kind of formal, not forced, but really helpful growth is what turned me into the people manager that I am today and really helped me along the way build up that confidence, and I think helped all of the teams that I manage.

I made plenty of mistakes, but I think they were probably 10 times less bad because I was kind of able to work through them and have someone who I could bounce ideas off of and work with. I’m a big fan of that. It’s a big time commitment. It’s a dollar commitment. But if you care about fostering real leadership, those are the kinds of things that a workplace has to focus on.

Shane Hastie: We need to invest in people, not just in their technical skills.

You can’t have non-technical managers in technical teams [15:21]

Nick van Wiggeren: And I have a hot take there too, of course. You can’t have non-technical managers. I think a lot of companies see management as its own discipline, something you can be really good at without any other skills. You have to be able to bring a hybrid of both, but you’re always going to be stronger in one than the other and you always need to invest in both of them. And I do think that people systemically undervalue the leadership side of it and assume that the more technical you are, the rest will kind of shake out from there.

Shane Hastie: Another trend that we saw was responsible tech. We see all of the climate change impact and so forth. Where are we headed there?

Responsible technology and climate impact [15:56]

Nick van Wiggeren: I’ve actually been looking a lot more especially at climate tech lately. I’m a big believer in technology. I’m a big believer in technology solving problems, and I really do think that the tech industry as a whole maybe could focus a little bit more on bringing that tech to solving some of our largest problems. So especially around energy, especially around what generative AI can just bring to productivity as a whole. I think we might have a real moment here of companies focusing on making money of course and making profit. There’s always money in almost any field, but really leveraging that to build the future of what the world needs. Because I think if tech just ignores that and if tech focuses on selling ads, if tech focuses on making just social media faster, that’s fine. Again, there’s money in advertising, there’s money in social media, but I’d really love to see us harness some of that ability and really focus on solving these big society level problems that almost feel too big to tackle, but won’t get solved without tech.

Shane Hastie: How do we get there?

Nick van Wiggeren: I ask myself this a lot, and I think it’s one step at a time. This is something I’ve learned throughout my career. I remember the first famous person I met, I don’t even remember who it was, but I remember thinking to myself that the biggest thing I walked away with was that they were just a person, just like you and just like me. And I think that we need to do the same thing for solving climate change. We need to do the same thing for solving world hunger, right? Find the most useful thing that you can do, find the most important thing that you can do to chip away at it, and go 100% full speed at chipping away at it.

No one built a company trying to tackle an entire segment. No one built a company trying to tackle all of climate change. But if you can find a vertical, if you can find a niche, if you can find an area, and a thousand people, a hundred thousand people can go do that, we can all run at the problem together. So I encourage people: don’t try and solve the whole thing. Don’t try and bring world peace. You might get a nice seed round because it’s a great idea, but build a company around a sustainable piece of the whole puzzle and then just keep executing, keep growing, and keep compounding, and we’ll get there. We have to get there, but don’t try to do it all at once.

Shane Hastie: Nick, some really interesting points and good advice in there. If people want to continue the conversation, where do they find you?

Nick van Wiggeren: I’m on Twitter as @NickVanWig. Feel free to tweet at me. I will tweet you back. I’m also on LinkedIn as well. Feel free to add me on LinkedIn for the workplace social media. But I’m happy to talk about all of this. I think the technology industry has been a force of nature in the world the last couple decades. And again, I think if we want to keep going and kind of keep making the society level progress that we need to, we’ve got to let tech maybe not lead the way, but be part of the pack leading the way.

Shane Hastie: Thank you so much.

Nick van Wiggeren: Thank you. It’s been a pleasure.




MongoDB, Inc. Announces Date of Second Quarter Fiscal 2024 Earnings Call – MarketScreener

MMS Founder
MMS RSS

Posted on mongodb google news. Visit mongodb google news

NEW YORK, Aug. 10, 2023 /PRNewswire/ — MongoDB, Inc. (NASDAQ: MDB) today announced it will report its second quarter fiscal year 2024 financial results for the three months ended July 31, 2023, after the U.S. financial markets close on Thursday, August 31, 2023.

MongoDB

In conjunction with this announcement, MongoDB will host a conference call on Thursday, August 31, 2023, at 5:00 p.m. (Eastern Time) to discuss the Company’s financial results and business outlook. A live webcast of the call will be available on the “Investor Relations” page of the Company’s website at http://investors.mongodb.com. To access the call by phone, please go to this link (registration link), and you will be provided with dial in details. To avoid delays, we encourage participants to dial into the conference call fifteen minutes ahead of the scheduled start time. A replay of the webcast will also be available for a limited time at http://investors.mongodb.com.

About MongoDB 
Headquartered in New York, MongoDB’s mission is to empower innovators to create, transform, and disrupt industries by unleashing the power of software and data. Built by developers, for developers, our developer data platform is a database with an integrated set of related services that allow development teams to address the growing requirements for today’s wide variety of modern applications, all in a unified and consistent user experience. MongoDB has tens of thousands of customers in over 100 countries. The MongoDB database platform has been downloaded hundreds of millions of times since 2007, and there have been millions of builders trained through MongoDB University courses. To learn more, visit mongodb.com.

Investor Relations
Brian Denyeau
ICR for MongoDB
646-277-1251
ir@mongodb.com

Media Relations
MongoDB
communications@mongodb.com

View original content to download multimedia: https://www.prnewswire.com/news-releases/mongodb-inc-announces-date-of-second-quarter-fiscal-2024-earnings-call-301897687.html

SOURCE MongoDB, Inc.

Article originally posted on mongodb google news. Visit mongodb google news


Database Market Size, Growth | Latest Inclinations, TOP Players Revenue, Industry Demand Analysis

MMS Founder
MMS RSS

Posted on mongodb google news. Visit mongodb google news

PRESS RELEASE

Published August 10, 2023

Database Market

LATEST Report of Database Market 2023-2030 | Segmentation by: Types, Applications, Players, Regions

Database Market Insights 2023:

TOP Dynamic players in the Database Market (Alibaba, Google, MongoDB, Rackspace Hosting, Cassandra, IBM, Salesforce, Teradata, Oracle, Microsoft, Tencent, Couchbase, SAP, Amazon Web Services) are driving the market’s growth in the ICT sector during the forecast period of 2023-2030. The Market Development Report has segmented the global Database market report based on Type (On-premises, On-demand) and Application (Small and Medium Business, Large Enterprises).

Updated Report (107 Pages)

Get a Sample PDF of report:https://www.marketgrowthreports.com/enquiry/request-sample/23359699

Customer Attention:

  1. How do you analyse the competition between the top key players included in the report?

To clearly reveal the competitive state of the industry, we concretely analyse not only the leading players with a voice on a global scale, but also the regional small and medium-sized players that play key roles and have plenty of growth potential. Key players in the global Database market are covered in Chapters 3 and 8:

  • Alibaba
  • Google
  • MongoDB
  • Rackspace Hosting
  • Cassandra
  • IBM
  • Salesforce
  • Teradata
  • Oracle
  • Microsoft
  • Tencent
  • Couchbase
  • SAP
  • Amazon Web Services

Brief Picture About Database Market:

Market Overview of Global Database market:
According to our latest research, the global Database market looks promising in the next 5 years. As of 2022, the global Database market was estimated at USD Million, and it’s anticipated to reach USD Million in 2028, with a CAGR during the forecast years.

Databases are used to store and manage various forms of data generated by a company. Database services can be provided on-premises or on-demand. On-demand services are known as cloud-based databases, which is gaining increasing acceptance among several organizations. A cloud-based database is suitable for organizations that require immediate access to database services, easy scalability options, low cost, and low maintenance. Service providers offer end-to-end solutions, which help organizations focus on their core business areas.

This report covers a research time span from 2018 to 2028, and presents a deep and comprehensive analysis of the global Database market, with a systematical description of the status quo and trends of the whole market, a close look into the competitive landscape of the major players, and a detailed elaboration on segment markets by type, by application and by region. (Ask for Sample of Report)

  1. Does this report consider the impact of COVID-19 and the Russia-Ukraine war on the Database market? As the COVID-19 pandemic and the Russia-Ukraine war are profoundly affecting global supply-chain relationships and the raw-material price system, we have taken them into consideration throughout the research; in Chapters 1.7, 2.7, 4.1, 7.5, and 8.7, we elaborate at full length on the impact of the pandemic and the war on the Database industry.

To Know How Covid-19 Pandemic and Russia Ukraine War Will Impact This Market- REQUEST SAMPLE

  1. What are the major applications and type of Database Market?

On the basis of product type, this report displays the production, revenue, price, market share, and growth rate of each type, primarily split into:

  • On-premises
  • On-demand

On the basis of end users/applications, this report focuses on the status and outlook for major applications/end users, covering consumption (sales), market share, and growth rate for each application, including:

  • Small and Medium Business
  • Large Enterprises

You will get detailed information regarding types and applications in Chapters 4, 5, and 6.

  1. What are the major information sources?
    Both primary and secondary data sources were used in compiling the report.
    Primary sources include extensive interviews of key opinion leaders and industry experts (such as experienced front-line staff, directors, CEOs, and marketing executives), downstream distributors, as well as end-users. Secondary sources include research into the annual and financial reports of the top companies, public filings, journals, etc. We also cooperate with some third-party databases.

    Please find a more complete list of data sources in Chapter 11.

Inquire more and share questions if any before the purchase on this report at-https://www.marketgrowthreports.com/enquiry/pre-order-enquiry/23359699

This Database Market Research/Analysis Report give Answers to following Questions:

  • How does Porter’s Five Forces model help you to study the Database Market?
  • What was the global market status of the Database Market? What were the capacity, production value, cost, and profit of the Database Market?
  • What is the major industry objective of the report? What are the critical discoveries of the report?
  • What are the TOP 10 KEY PLAYERS in the Database Market?

Get a Sample PDF of report – https://www.marketgrowthreports.com/enquiry/request-sample/23359699

Detailed Table Of Content of Global Database Market Insights and Forecast to 2030

1 Market Overview

1.1 Product Overview and Scope of Database

1.2 Classification of Database by Type

1.2.1 Overview: Global Database Market Size by Type: 2017 Versus 2022 Versus 2030

1.2.2 Global Database Revenue Market Share by Type in 2022

1.3 Global Database Market by Application

1.4 Global Database Market Size and Forecast

1.5 Global Database Market Size and Forecast by Region

1.6 Market Drivers, Restraints and Trends

1.6.1 Database Market Drivers

1.6.2 Database Market Restraints

1.6.3 Database Trends Analysis

2 Company Profiles

2.1 Company

2.1.1 Company Details

2.1.2 Company Major Business

2.1.3 Company Database Product and Solutions

2.1.4 Company Database Revenue, Gross Margin and Market Share (2019, 2020, 2022 and 2023)

2.1.5 Company Recent Developments and Future Plans

3 Market Competition, by Players

3.1 Global Database Revenue and Share by Players (2019, 2020, 2022, and 2023)

3.2 Market Concentration Rate

3.2.1 Top 3 Database Players Market Share in 2022

3.2.2 Top 10 Database Players Market Share in 2022

3.2.3 Market Competition Trend

3.3 Database Players Head Office, Products and Services Provided

3.4 Database Mergers and Acquisitions

3.5 Database New Entrants and Expansion Plans

Get a Sample PDF of report –https://www.marketgrowthreports.com/enquiry/request-sample/23359699

4 Market Size Segment by Type

4.1 Global Database Revenue and Market Share by Type (2017-2023)

4.2 Global Database Market Forecast by Type (2023-2030)

5 Market Size Segment by Application

5.1 Global Database Revenue Market Share by Application (2017-2023)

5.2 Global Database Market Forecast by Application (2023-2030)

6 Regions by Country, by Type, and by Application

6.1 Database Revenue by Type (2017-2030)

6.2 Database Revenue by Application (2017-2030)

6.3 Database Market Size by Country

6.3.1 Database Revenue by Country (2017-2030)

6.3.2 United States Database Market Size and Forecast (2017-2030)

6.3.3 Canada Database Market Size and Forecast (2017-2030)

6.3.4 Mexico Database Market Size and Forecast (2017-2030)

7 Research Findings and Conclusion

8 Appendix

8.1 Methodology

8.2 Research Process and Data Source

8.3 Disclaimer

9 Research Methodology

10 Conclusion

Continued…

Reasons to buy this report:

  • To get a comprehensive overview of the Database Market
  • To gain wide-ranging information about the top players in this industry, their product portfolios, and the key strategies they have adopted.
  • To gain insights into the countries/regions in the Database Market.

Purchase this report (Price 3380 USD for a single-user license): https://www.marketgrowthreports.com/purchase/23359699

Contact Us:

Market Growth Reports

Phone: US + 1 424 253 0807

UK + 44 203 239 8187

E-mail:[email protected]

Web:https://www.marketgrowthreports.com

Press Release Distributed by The Express Wire

To view the original version on The Express Wire visit Database Market Size, Growth | Examination Forecast [2023-2030] | Latest Inclinations, TOP Players Revenue, Industry Demand Analysis

TheExpressWire

Article originally posted on mongodb google news. Visit mongodb google news


MongoDB (MDB) Stock Sinks As Market Gains: What You Should Know – Yahoo Finance

MMS Founder
MMS RSS

Posted on mongodb google news. Visit mongodb google news

MongoDB (MDB) closed the most recent trading day at $359.66, moving -0.09% from the previous trading session. This change lagged the S&P 500’s 0.03% gain on the day. Elsewhere, the Dow gained 0.15%, while the tech-heavy Nasdaq added 0.12%.

Prior to today’s trading, shares of the database platform had lost 7.95% over the past month. This has lagged the Computer and Technology sector’s gain of 0.11% and the S&P 500’s gain of 1.66% in that time.

MongoDB will be looking to display strength as it nears its next earnings release. On that day, MongoDB is projected to report earnings of $0.45 per share, which would represent year-over-year growth of 295.65%. Meanwhile, our latest consensus estimate is calling for revenue of $389.93 million, up 28.41% from the prior-year quarter.

Looking at the full year, our Zacks Consensus Estimates suggest analysts are expecting earnings of $1.51 per share and revenue of $1.54 billion. These totals would mark changes of +86.42% and +19.78%, respectively, from last year.

Investors should also note any recent changes to analyst estimates for MongoDB. Recent revisions tend to reflect the latest near-term business trends. As such, positive estimate revisions reflect analyst optimism about the company’s business and profitability.

Based on our research, we believe these estimate revisions are directly related to near-term stock moves. Investors can capitalize on this by using the Zacks Rank. This model considers these estimate changes and provides a simple, actionable rating system.

Ranging from #1 (Strong Buy) to #5 (Strong Sell), the Zacks Rank system has a proven, outside-audited track record of outperformance, with #1 stocks returning an average of +25% annually since 1988. The Zacks Consensus EPS estimate remained stagnant within the past month. MongoDB currently has a Zacks Rank of #2 (Buy).

Valuation is also important, so investors should note that MongoDB has a Forward P/E ratio of 238.61 right now. This represents a premium compared to its industry’s average Forward P/E of 38.26.

The Internet – Software industry is part of the Computer and Technology sector. This group has a Zacks Industry Rank of 87, putting it in the top 35% of all 250+ industries.

The Zacks Industry Rank gauges the strength of our individual industry groups by measuring the average Zacks Rank of the individual stocks within the groups. Our research shows that the top 50% rated industries outperform the bottom half by a factor of 2 to 1.

Make sure to utilize Zacks.com to follow all of these stock-moving metrics, and more, in the coming trading sessions.

Want the latest recommendations from Zacks Investment Research? Today, you can download 7 Best Stocks for the Next 30 Days. Click to get this free report

MongoDB, Inc. (MDB) : Free Stock Analysis Report

To read this article on Zacks.com click here.

Zacks Investment Research

Article originally posted on mongodb google news. Visit mongodb google news


The Challenges of AI Product Development

MMS Founder
MMS Ben Linders

Article originally posted on InfoQ. Visit InfoQ

Developing artificial intelligence (AI) products involves creating models and feeding data to train them, testing the models, and deploying them. Software engineers can support the adoption of AI and machine learning (ML) in companies by building an understanding of the technologies, encouraging experimentation, and ensuring compliance with regulations and ethical standards.

Zorina Alliata spoke about AI product development at OOP 2023 Digital.

To create AI products such as forecasting software or recommendation engines, we have to create models based on patterns in historical data, Alliata explained. To develop these models, we use development techniques that are different from regular software development. For example, there are a lot of unknowns, iterative processes, and mysteries to be found when analysing the data, Alliata said.

According to Alliata, the machine learning process is based on the following steps:

  • Feed data into an algorithm
  • Use this data to train a model
  • Test and deploy the model
  • Consume the deployed model to do an automated predictive task
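The four steps above can be sketched end-to-end with a toy nearest-centroid "model" in pure Python. The data, labels, and model choice below are invented for illustration and are not from Alliata's talk:

```python
# Minimal sketch of the four-step loop: feed data in, train a model,
# test it, then consume it for an automated predictive task.

def train(rows):
    """Steps 1-2: feed labeled data to an algorithm and fit a model
    (here the 'model' is simply the per-class feature means)."""
    sums, counts = {}, {}
    for features, label in rows:
        acc = sums.setdefault(label, [0.0] * len(features))
        for i, value in enumerate(features):
            acc[i] += value
        counts[label] = counts.get(label, 0) + 1
    return {label: [v / counts[label] for v in acc]
            for label, acc in sums.items()}

def predict(model, features):
    """Step 4: consume the deployed model to label a new observation."""
    def sq_dist(centroid):
        return sum((a - b) ** 2 for a, b in zip(features, centroid))
    return min(model, key=lambda label: sq_dist(model[label]))

# Step 3: test the model on held-out examples before "deploying" it.
train_rows = [([0.1, 0.2], "low"), ([0.9, 0.8], "high"), ([0.2, 0.1], "low")]
model = train(train_rows)
assert predict(model, [0.15, 0.15]) == "low"
assert predict(model, [0.85, 0.90]) == "high"
```

A real pipeline would swap the toy centroid model for a library algorithm and add a proper train/test split, but the shape of the loop stays the same.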

Data is extremely important, Alliata argued. The algorithms require a lot of data to learn patterns from. Ensuring there is enough data, and that it is clean, fair, and trustworthy, is by itself a new level of processing that we did not do to this extent in the past, she said.

The result of product development, the model, is a series of algorithms that identify various information in the ocean of data, and most of the time the data scientists have to try several algorithms to see which one works best in each use case, Alliata mentioned. This introduces the need to iterate and try various approaches, so team leads should understand that they need to allow enough time during the modeling phase.

Alliata said that once an AI product is delivered, it needs constant care and monitoring as well, to make sure it still performs optimally as patterns might change. Occasionally, the model will need re-training so it can learn from the newer data provided by consumers, as well as from feedback of its own behavior and performance.

Software engineers can contribute to the adoption of AI and ML in their companies by gaining an understanding of these new technologies and their specific challenges, Alliata said. Software engineers can also help to create an environment that encourages experimentation and learning, and provide guidance on best practices for AI development, she added.

Additionally, software engineers can help to ensure that ML models are compliant with relevant regulations and ethical standards. Setting standards and a clear operating model will enable better communication and collaboration between all teams, technical and business, Alliata concluded.

InfoQ interviewed Zorina Alliata about AI product development.

InfoQ: How do AI transformations relate to agile?

Zorina Alliata: AI transformations relate to agile in that they both involve a process of transition. Agile leaders can play an important role in AI transformations by promoting lean budgeting, agile teams and teams of teams, agile delivery that fails fast, and specific reports to show value delivered.

Agile leaders bring value to the AI transformation by using their Agile expertise in managing training schedules and content, promoting technical excellence, checking for compliance/bias/fairness features, and proposing changes as needed to the current processes to enable scalability.

Agile leaders also know how to deliver correctly and on time, create metrics for important KPIs and trends, and provide visibility into the work. All these skills are very useful and needed during an AI transformation.

InfoQ: What have you learned from AI product delivery?

Alliata: There is a possibility of data being altered from the future – something I found out the hard way! This happens, for example, when we apply a data fix and inadvertently change old records, even slightly. Then we train the ML model on that old data, expecting that it had captured the state at the time it was recorded, while in fact the data has been changed.
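A minimal sketch of that failure mode (the records and the "fix" are invented for this example): training reads the live table, which a later data fix has silently rewritten, while a point-in-time snapshot preserves what the model should actually have seen.

```python
import copy

# Historical records as originally captured (illustrative data).
records = [{"id": 1, "price": 100}, {"id": 2, "price": 250}]

# Take a point-in-time snapshot BEFORE any later fixes, so training
# can reproduce the state exactly as it was recorded.
snapshot = copy.deepcopy(records)

# Months later, a well-meaning data fix mutates the historical record.
records[0]["price"] = 110

# Training on the live table silently sees the altered past...
assert records[0]["price"] == 110
# ...while the snapshot preserves the state the model should learn from.
assert snapshot[0]["price"] == 100
```

In practice the snapshot would be a versioned dataset or an immutable export rather than an in-memory copy, but the principle is the same: train on what was known at the time, not on a retroactively corrected table.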

Then there is the infrastructure – you need to train the model, then release the model, then keep it updated. The environment and the tools you use to write ML models and monitor ML models have to be compliant with your company’s security standards and regulatory requirements. The infrastructure is different for AI and ML products, and it will require some investments up front, as well as specialised supporting roles such as Machine Learning engineers.



$1000 Invested In MongoDB 5 Years Ago Would Be Worth This Much Today – Benzinga

MMS Founder
MMS RSS

Posted on mongodb google news. Visit mongodb google news

MongoDB MDB has outperformed the market over the past 5 years by 31.51% on an annualized basis producing an average annual return of 40.99%. Currently, MongoDB has a market capitalization of $25.26 billion.

Buying $1000 In MDB: If an investor had bought $1000 of MDB stock 5 years ago, it would be worth $5,680.74 today based on a price of $357.82 for MDB at the time of writing.

MongoDB’s Performance Over Last 5 Years

Finally — what’s the point of all this? The key insight to take from this article is to note how much of a difference compounded returns can make in your cash growth over a period of time.
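As a quick sanity check of that compounding (arithmetic only; the small gap versus the quoted 40.99% presumably comes from rounding and the exact measurement window), the implied annual rate can be backed out of the article's own figures:

```python
# Solve 1000 * (1 + r)**5 = 5680.74 for the implied annual rate r,
# using the invested amount and end value quoted in the article.
invested, value, years = 1000.0, 5680.74, 5
implied_annual = (value / invested) ** (1 / years) - 1
assert 0.41 < implied_annual < 0.42  # roughly 41.5% per year

# The same $1000 at a flat 10%/yr shows how strongly the higher rate
# compounds over the same five years (about $1,610 vs ~$5,681).
modest = invested * 1.10 ** years
assert 1600 < modest < 1620
```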

This article was generated by Benzinga’s automated content engine and reviewed by an editor.

Article originally posted on mongodb google news. Visit mongodb google news


MongoDB Sees Unusually High Options Volume (NASDAQ:MDB) – Defense World

MMS Founder
MMS RSS

Posted on mongodb google news. Visit mongodb google news

MongoDB, Inc. (NASDAQ:MDB – Get Free Report) was the recipient of some unusual options trading activity on Wednesday. Investors purchased 36,130 call options on the company. This represents an increase of approximately 2,077% compared to the typical volume of 1,660 call options.

Analysts Set New Price Targets

MDB has been the subject of several research reports. VNET Group restated a “maintains” rating on shares of MongoDB in a research report on Monday, June 26th. Morgan Stanley raised their price objective on shares of MongoDB from $270.00 to $440.00 in a research report on Friday, June 23rd. KeyCorp upped their price objective on shares of MongoDB from $372.00 to $462.00 and gave the company an “overweight” rating in a research note on Friday, July 21st. Guggenheim lowered shares of MongoDB from a “neutral” rating to a “sell” rating and upped their price objective for the company from $205.00 to $210.00 in a research note on Thursday, May 25th. They noted that the move was a valuation call. Finally, JMP Securities upped their price objective on shares of MongoDB from $400.00 to $425.00 and gave the company an “outperform” rating in a research note on Monday, July 24th. One equities research analyst has rated the stock with a sell rating, three have given a hold rating and twenty have given a buy rating to the company’s stock. According to data from MarketBeat, the company currently has a consensus rating of “Moderate Buy” and a consensus price target of $378.09.

Read Our Latest Stock Report on MongoDB

Insider Activity at MongoDB

In related news, Director Dwight A. Merriman sold 3,000 shares of the stock in a transaction dated Thursday, June 1st. The shares were sold at an average price of $285.34, for a total value of $856,020.00. Following the completion of the transaction, the director now owns 1,219,954 shares in the company, valued at approximately $348,101,674.36. The sale was disclosed in a legal filing with the Securities & Exchange Commission, which is accessible through this link. Also, Director Dwight A. Merriman sold 6,000 shares of the firm’s stock in a transaction dated Friday, August 4th. The shares were sold at an average price of $415.06, for a total transaction of $2,490,360.00. Following the transaction, the director now owns 1,207,159 shares of the company’s stock, valued at approximately $501,043,414.54. The disclosure for this sale can be found here. Insiders sold 102,220 shares of company stock worth $38,763,571 in the last 90 days. Corporate insiders own 4.80% of the company’s stock.

Hedge Funds Weigh In On MongoDB

A number of institutional investors have recently modified their holdings of the company. Dimensional Fund Advisors LP raised its position in shares of MongoDB by 7.6% in the second quarter. Dimensional Fund Advisors LP now owns 87,520 shares of the company’s stock valued at $35,967,000 after purchasing an additional 6,182 shares during the period. Veritable L.P. raised its position in shares of MongoDB by 1.4% in the second quarter. Veritable L.P. now owns 2,321 shares of the company’s stock valued at $954,000 after purchasing an additional 33 shares during the period. Kingswood Wealth Advisors LLC bought a new position in shares of MongoDB in the second quarter valued at approximately $257,000. Canada Pension Plan Investment Board raised its position in shares of MongoDB by 83.4% in the second quarter. Canada Pension Plan Investment Board now owns 36,700 shares of the company’s stock valued at $15,083,000 after purchasing an additional 16,690 shares during the period. Finally, Highland Capital Management LLC bought a new position in shares of MongoDB in the second quarter valued at approximately $2,824,000. Institutional investors own 89.22% of the company’s stock.

MongoDB Price Performance

NASDAQ MDB opened at $360.00 on Thursday. The company has a quick ratio of 4.19, a current ratio of 4.19 and a debt-to-equity ratio of 1.44. The stock has a market capitalization of $25.41 billion, a PE ratio of -77.09 and a beta of 1.13. The company has a 50-day moving average price of $393.69 and a 200-day moving average price of $287.14. MongoDB has a 12-month low of $135.15 and a 12-month high of $439.00.

MongoDB (NASDAQ:MDB – Get Free Report) last posted its earnings results on Thursday, June 1st. The company reported $0.56 earnings per share (EPS) for the quarter, beating analysts’ consensus estimates of $0.18 by $0.38. The firm had revenue of $368.28 million during the quarter, compared to analysts’ expectations of $347.77 million. MongoDB had a negative net margin of 23.58% and a negative return on equity of 43.25%. The company’s revenue was up 29.0% compared to the same quarter last year. During the same period in the previous year, the company earned ($1.15) earnings per share. As a group, equities analysts predict that MongoDB will post -2.8 earnings per share for the current fiscal year.

MongoDB Company Profile


MongoDB, Inc provides general purpose database platform worldwide. The company offers MongoDB Atlas, a hosted multi-cloud database-as-a-service solution; MongoDB Enterprise Advanced, a commercial database server for enterprise customers to run in the cloud, on-premise, or in a hybrid environment; and Community Server, a free-to-download version of its database, which includes the functionality that developers need to get started with MongoDB.

Receive News & Ratings for MongoDB Daily – Enter your email address below to receive a concise daily summary of the latest news and analysts’ ratings for MongoDB and related companies with MarketBeat.com’s FREE daily email newsletter.

Article originally posted on mongodb google news. Visit mongodb google news


DataStax Adds Vector Search To Astra DB And DataStax Enterprise – I Programmer

MMS Founder
MMS RSS

Posted on nosqlgooglealerts. Visit nosqlgooglealerts

DataStax has announced support for vector search on Astra DB and DataStax Enterprise, opening the option for storing data as vector embeddings to support uses including generative AI applications like those built on GPT-4.

Astra DB is DataStax’s NoSQL cloud database that is built on Apache Cassandra, while DataStax Enterprise (DSE) is the equivalent in-house, self-managed data platform that is also built on Cassandra, and is aimed at use by companies who want to keep their data in-house.


The new vector search facility is available in Astra DB for Google Cloud, Microsoft Azure and Amazon Web Services (AWS) and in DataStax Enterprise for on-premises databases. It is also due to be added to the next release of Apache Cassandra.

Patrick McFadin, Vice President Developer Relations, DataStax, said:

“We try to keep our code base as close to each other for OSS Cassandra, DSE and Astra. DSE and Astra in this case are slightly ahead of the Cassandra 5 release with these features, but will be in parity when Cassandra 5 ships.”

Cassandra 5 is expected later this year.

Vector search is a way of retrieving data that uses semantic meaning and similarity rather than specific keywords. The addition of support for vector search opens the option of querying of large volumes of unstructured data like text, audio, images, and videos using semantic meaning and can be used to uncover hidden relationships and patterns.

The technique involves creating a numeric index representing the data, then storing it in a way that lets developers ask “Given one thing, what other things are similar?” Cassandra’s developers plan to use Lucene’s Hierarchical Navigable Small World (HNSW) library, which they describe as the best ANN (approximate nearest neighbor) algorithm for Java, saying it provides a fast and efficient solution for finding approximate nearest neighbors in high-dimensional space. Cassandra also has a search mechanism called Storage Attached Indexes (SAI) that allows for different search implementations. 
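The "given one thing, what other things are similar?" query can be illustrated with a brute-force cosine-similarity scan in plain Python. Real deployments replace the linear scan with an approximate index such as Lucene's HNSW, and the document names and embeddings below are made up for the sketch:

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Hypothetical embeddings for stored documents. In a real system these
# come from an embedding model and are indexed (e.g. with HNSW) rather
# than scanned linearly.
docs = {
    "cassandra_ops": [0.9, 0.1, 0.0],
    "wine_pairings": [0.0, 0.2, 0.9],
    "nosql_tuning":  [0.8, 0.3, 0.1],
}

def most_similar(query, k=2):
    """Rank stored documents by similarity to the query embedding."""
    return sorted(docs, key=lambda d: cosine(query, docs[d]), reverse=True)[:k]

# A query embedding near the database-related documents ranks them first.
assert most_similar([0.85, 0.2, 0.05]) == ["cassandra_ops", "nosql_tuning"]
```

HNSW gives approximately this ranking without comparing the query to every stored vector, which is what makes semantic search practical over large volumes of data.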

The DataStax team modified this and used it along with Lucene HNSW for Astra DB and DBE’s indexing and query syntax.

DataStax also plans to use Cassio, the open source framework which was developed to integrate generative AI and ML into Cassandra, to provide a way to integrate vector search into applications. Cassio is a Python library that simplifies the task of using vector search with Cassandra, and the DataStax developers plan to use it as an interface between their software and GenAI libraries such as LangChain.

The DataStax team says the new facilities will provide users with better search accuracy and more relevant search results, including finding hidden relationships and patterns that traditional keyword searches might miss.

The vector search also means that Astra DB and DataStax Enterprise can perform similarity calculations and ranking directly within the database, eliminating the need to transfer large amounts of data to external systems.

The vector search capabilities are available in Astra DB and DataStax Enterprise as a developer preview now.


Alongside the introduction of vector search, DataStax is running a virtual “I Love AI” event designed to unlock the power of generative AI for application architects, software developers, practitioners, and CTOs. The event takes place on August 23rd, with two sessions timed to cater for the USA and for Europe/Asia, and is designed to give insights into the data platform and AI solutions you need, delivered by experts with real-world experience making AI a reality. Topics will include how to build generative AI apps with scale, governance, and data security, and ways to overcome the biggest obstacles keeping gen AI from being enterprise-ready. Registration is open here.


More Information

Vector Search Developer Preview

I Love AI Event

Related Articles

DataStax Astra DB gets Change Data Capture

DataStax Extends Stargate

DataStax Adds gRPC To Stargate For Performant Microservices

Cassandra 4.1 Focuses On Pluggability

Cassandra 4 Improves Performance


MongoDB CEO Wants To Know Your Problems | Investor’s Business Daily

MMS Founder
MMS RSS

Posted on mongodb google news. Visit mongodb google news

If Dev Ittycheria hears a strange knock or ping when driving his car, he’s not the kind of guy to ignore it. The MongoDB CEO prefers to get trouble checked out right away.







“Except for wine, nothing gets better over time,” said Ittycheria, president and chief executive of MongoDB (MDB), a New York City-based cloud database firm.

Similarly, when he hears bad news from his employees, he may assume it’s much worse than what they tell him. He doesn’t shrug it off or think it’ll take care of itself.

“When you hear bad news but don’t do anything about it, it doesn’t just go away,” he said. “Good news will find me anywhere. Bad news travels very slowly up the organization. It’s human nature.”

Seek Out Bad News

To illustrate the danger of brushing off bad news, Ittycheria poses a hypothetical. A staffer makes a sales forecast based on the best available information. But the staffer’s boss knows a major deal that’s in the works will fall apart in the coming weeks, ruining the forecast.

“The boss decides not to say anything until near the end of the quarter,” Ittycheria said. A better response is to face reality right away and address it head-on rather than let problems fester.

Ittycheria, 56, takes the same decisive approach with personnel decisions. Soon after the company’s initial public offering in 2017, he grew concerned with a key executive’s poor performance.

“The easy decision would be, ‘I can’t fire this person right now just after going public,'” Ittycheria said. “But I was worried about how this person’s leadership style would drive other employees to leave. It was penalizing all the good people.” Ittycheria let the executive go.

He credits a longtime mentor, Steven Walske, with enhancing his leadership skills. A top tech investor and former chief executive, Walske advised Ittycheria to tackle problems sooner, not later.

Ittycheria, who became MongoDB’s CEO in 2014, took Walske’s advice to heart. When conducting spot checks with his roughly 4,600 employees, Ittycheria doesn’t want to hear about what’s going well. He’s eager to learn what’s not working. “Success is evident to everyone,” he said.

In staff meetings, he likes to discuss problems. He welcomes people who raise thorny issues and encourages them to share their concerns. He models receptivity so they don’t fear a shoot-the-messenger response.

“They leave the room feeling comfortable talking about problems, not successes,” he said. “They know we don’t kick the can down the road. That’s our culture.”

Tap Solid Results Like The MongoDB CEO

Ittycheria’s method might be a bit unusual. But there’s no arguing with the results.

MongoDB’s shares have rocketed more than 1,100% since the company’s first day of trading in October 2017. That easily tops the S&P 500’s 76% gain over the same period.

The company’s fundamentals are impressive, too. MongoDB pulled off an adjusted profit of $64.8 million, or 81 cents a share, in fiscal 2023, reversing a loss in 2018. And revenue hit $1.3 billion in fiscal 2023, up 731% from 2018 — a blistering 53% annualized growth rate.

Harness Speed And Quality For Best Results

To maintain the company’s edge, Ittycheria emphasizes the need for speed and quality work. Despite his fondness for speed, however, he avoids rushing to judge what he hears from his team.

If he disagrees, he doesn’t interrupt to impose his views. Instead, he poses questions such as, “What do you see that I don’t?” or “Why do you think that?”

“My old self might cut them off and think, ‘You’re wrong,'” he said. “Now I want to encourage different points of view and learn from them. The most important thing a leader can say is, ‘I was wrong.'”

In terms of quality, he knows that top performers breed more collective excellence. “Great people want to work with other great people,” he said.

He’s also keen to distinguish between keeping busy and making an impact. Assembling a workforce that’s a beehive of activity doesn’t necessarily mean it’s delivering superior results.

Instead, Ittycheria prefers to evaluate each person’s performance based on impact.

“We establish a clear set of company objectives for the year,” he explained. “These may include specific projects we’re working on, partnerships we want to build and new customers we want to acquire. Then we work backwards and ask, ‘How do we meet those objectives?’ and ‘Does your work tie to one or more of our objectives?'”

Through ongoing operational reviews, Ittycheria and his leadership team track employees’ progress against their specific goals. They analyze to what extent an individual’s actions produce the desired impact — and suggest ways they might modify their approach or work habits to make a greater impact.

“The tactics can change,” he said. “But the goals don’t change.”

MongoDB CEO Fosters Trust Through Vulnerability

Some leaders hesitate to admit they’re confused or unaware of what’s going on at their organization. But Ittycheria doesn’t pretend to have all the answers.

After launching his first tech company in 1998, he found himself in the CEO role for the first time. Reflecting on that experience, he admits that he initially felt “a little impostor syndrome.”

He soon realized what he calls “the value of being vulnerable.” Rather than radiate know-it-all authority, he freely acknowledged what he didn’t know.

“I can’t know everything about everything,” he said. “The more candor there is, the better.”

He has harnessed vulnerability to build trust and create a culture of openness at the tech companies he’s run. At MongoDB, he’s particularly attuned to clarifying what people tell him by expressing his confusion.

In meetings, he might say, “I’m a bit lost. I don’t understand the point you’re making.”

As a result, he gathers valuable information and insight. He may discover that he lacked the proper context to understand an employee’s point. Or the staffer may rephrase a point to make it clearer and prevent misunderstanding.

Better yet, he models how he wants others to show vulnerability. They’re more comfortable admitting when they don’t understand something if they see their CEO do the same.

MongoDB CEO Leads With Intense Focus

Ittycheria’s willingness to speak up and seek clarification if he’s confused reflects his intellectual curiosity. To stay a step ahead and lead the company in a fast-moving industry, he directs his energy toward listening and learning.

“Dev operates with a sense of urgency,” said Chip Hazard, general partner at Flybridge, an early-stage venture capital firm. “That comes through the moment you meet him.”

Hazard, who met Ittycheria in 2002, has served on MongoDB’s board of directors since 2009. He admires Ittycheria’s ability to channel his energy in multiple directions.

“His consistency of focus is really impressive,” Hazard said. “He can context-switch, focusing on both short-run operational excellence and the long-term vision to build the company. One minute, he’ll focus on the company’s 10 key initiatives this year. A minute later, he can talk about its long-term vision. He keeps many initiatives in his mind at once.”

His steady, matter-of-fact demeanor sets the right tone, Hazard adds. He addresses his team with low-key intensity and clarity.

“He’s not a rah-rah guy or a sky-is-falling guy,” Hazard said. “He doesn’t celebrate too much when things are good and doesn’t get too down when things aren’t good. He’s constantly focused on, ‘We’re doing great. How do we get better?'”

Dev Ittycheria’s Keys:

  • President and CEO of MongoDB, a New York City-based cloud database firm.
  • Overcame: Employees’ tendency to share bad news later, not sooner.
  • Lesson: “I can’t know everything about everything. The more candor there is, the better.”


Article originally posted on mongodb google news. Visit mongodb google news
