MongoDB director Dwight Merriman sells shares worth $253,590 – Investing.com

MMS Founder
MMS RSS

Posted on mongodb google news. Visit mongodb google news

Following this transaction, Merriman holds 85,652 shares indirectly through the Dwight A. Merriman Charitable Foundation, while directly owning 1,117,006 shares. An additional 520,896 shares are held indirectly by The Dwight A. Merriman 2012 Trust for the benefit of his children. The sale was executed under a pre-established Rule 10b5-1 trading plan. InvestingPro data shows MongoDB holds more cash than debt on its balance sheet, with 24 analysts maintaining positive earnings revisions. Get access to 8 more exclusive ProTips and comprehensive analysis through the MongoDB Pro Research Report.

In other recent news, MongoDB has seen a flurry of activity from analysts and investors alike. Guggenheim upgraded MongoDB shares from Neutral to Buy, setting a price target of $300. This upgrade is based on a discounted cash flow analysis and suggests a potential upside of 22%. Guggenheim predicts MongoDB’s total revenue growth for FY26 to mirror the conservative 15% growth rate set in the past two years, potentially performing better than FY24 rather than FY25.

In a significant financial move, MongoDB issued shares and redeemed convertible notes, allowing note holders to convert their debt holdings into equity in the company. MongoDB issued 5,662,979 shares of its common stock in this transaction. This strategic move aligns with MongoDB’s financial strategies and reflects its commitment to efficient capital structure management.

Several analysts have provided varied perspectives on MongoDB’s future performance. Tigress Financial Partners maintained a Buy rating, raising its price target to $430.00. Monness, Crespi, Hardt downgraded MongoDB’s shares to Sell, citing a slowdown in growth for MongoDB Atlas and the recent resignation of the CFO. Macquarie initiated coverage on MongoDB with a Neutral rating and a price target of $300, acknowledging the company’s appeal among developers, particularly for AI applications.

In Q3 2025, MongoDB reported a 22% year-over-year increase in revenue, reaching $529.4 million. This growth was consistent across its subscription revenue, which also rose by 22% to $512.2 million, and its services revenue, which saw an 18% increase to $17.2 million. These recent developments provide a snapshot of MongoDB’s current financial health and market position.

This article was generated with the support of AI and reviewed by an editor. For more information see our T&C.




Harbor Capital Advisors Inc. Sells 2,226 Shares of MongoDB, Inc. (NASDAQ:MDB)



Harbor Capital Advisors Inc. decreased its position in MongoDB, Inc. (NASDAQ:MDB) by 63.1% in the 4th quarter, according to its most recent filing with the Securities & Exchange Commission. The firm owned 1,301 shares of the company’s stock after selling 2,226 shares during the period. Harbor Capital Advisors Inc.’s holdings in MongoDB were worth $303,000 at the end of the most recent quarter.

A number of other large investors have also made changes to their positions in the company. Aigen Investment Management LP acquired a new position in shares of MongoDB in the third quarter valued at approximately $1,045,000. Geode Capital Management LLC boosted its stake in MongoDB by 2.9% in the 3rd quarter. Geode Capital Management LLC now owns 1,230,036 shares of the company’s stock worth $331,776,000 after purchasing an additional 34,814 shares during the period. B. Metzler seel. Sohn & Co. Holding AG bought a new position in MongoDB during the 3rd quarter worth about $4,366,000. Charles Schwab Investment Management Inc. raised its stake in shares of MongoDB by 2.8% in the 3rd quarter. Charles Schwab Investment Management Inc. now owns 278,419 shares of the company’s stock valued at $75,271,000 after purchasing an additional 7,575 shares during the period. Finally, Sanctuary Advisors LLC bought a new stake in shares of MongoDB in the second quarter valued at about $1,860,000. Institutional investors own 89.29% of the company’s stock.

Analyst Ratings Changes

MDB has been the subject of a number of research reports. Tigress Financial increased their price objective on MongoDB from $400.00 to $430.00 and gave the stock a “buy” rating in a report on Wednesday, December 18th. Truist Financial reiterated a “buy” rating and issued a $400.00 price objective (up from $320.00) on shares of MongoDB in a report on Tuesday, December 10th. Robert W. Baird increased their target price on shares of MongoDB from $380.00 to $390.00 and gave the stock an “outperform” rating in a report on Tuesday, December 10th. Royal Bank of Canada lifted their price target on shares of MongoDB from $350.00 to $400.00 and gave the company an “outperform” rating in a research note on Tuesday, December 10th. Finally, JMP Securities restated a “market outperform” rating and set a $380.00 price objective on shares of MongoDB in a research note on Wednesday, December 11th. Two analysts have rated the stock with a sell rating, four have assigned a hold rating, twenty-two have assigned a buy rating and one has given a strong buy rating to the company’s stock. Based on data from MarketBeat.com, the company currently has a consensus rating of “Moderate Buy” and an average target price of $364.64.



MongoDB Trading Up 0.9%

MongoDB stock opened at $242.41 on Wednesday. The company has a market capitalization of $18.05 billion, a price-to-earnings ratio of -88.47 and a beta of 1.25. The stock’s 50 day moving average is $280.67 and its 200-day moving average is $269.51. MongoDB, Inc. has a 12 month low of $212.74 and a 12 month high of $509.62.

MongoDB (NASDAQ:MDB) last issued its quarterly earnings data on Monday, December 9th. The company reported $1.16 EPS for the quarter, beating analysts’ consensus estimates of $0.68 by $0.48. MongoDB had a negative return on equity of 12.22% and a negative net margin of 10.46%. The firm had revenue of $529.40 million for the quarter, compared to analyst estimates of $497.39 million. During the same period in the prior year, the firm posted $0.96 EPS. The company’s revenue for the quarter was up 22.3% on a year-over-year basis. As a group, equities research analysts predict that MongoDB, Inc. will post -1.86 earnings per share for the current fiscal year.
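The headline figures above are internally consistent, which a quick back-of-the-envelope check confirms (values are taken from this article; the variable names are illustrative, not from any filing):

```python
# Sanity-check the quarter's headline figures reported above.
eps_actual = 1.16
eps_consensus = 0.68
beat = round(eps_actual - eps_consensus, 2)  # size of the EPS beat

revenue = 529.40       # reported quarterly revenue, $ millions
growth_yoy = 0.223     # stated 22.3% year-over-year growth
prior_year_revenue = revenue / (1 + growth_yoy)  # implied prior-year quarter

print(beat)                           # 0.48, matching the stated beat
print(round(prior_year_revenue, 1))   # ~432.9 ($ millions)
```

The implied prior-year quarter of roughly $432.9 million lines up with the stated 22.3% growth rate.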

Insider Buying and Selling

In other MongoDB news, Director Dwight A. Merriman sold 3,000 shares of the firm’s stock in a transaction on Thursday, January 2nd. The stock was sold at an average price of $237.73, for a total transaction of $713,190.00. Following the transaction, the director now owns 1,117,006 shares in the company, valued at $265,545,836.38. This trade represents a 0.27% decrease in their position. The transaction was disclosed in a legal filing with the Securities & Exchange Commission, which is available at the SEC website. Also, CAO Thomas Bull sold 1,000 shares of MongoDB stock in a transaction on Monday, December 9th. The shares were sold at an average price of $355.92, for a total value of $355,920.00. Following the completion of the transaction, the chief accounting officer now directly owns 15,068 shares of the company’s stock, valued at $5,363,002.56. This represents a 6.22% decrease in their ownership of the stock. Over the last three months, insiders have sold 23,776 shares of company stock valued at $6,577,625. Company insiders own 3.60% of the stock.
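The insider-sale arithmetic above checks out, as a short consistency check shows (figures come from this article, not directly from the SEC filing):

```python
# Cross-check the Merriman sale figures reported above.
shares_sold = 3_000
avg_price = 237.73
shares_after = 1_117_006

total_value = round(shares_sold * avg_price, 2)       # 713190.0
remaining_value = round(shares_after * avg_price, 2)  # 265545836.38
pct_decrease = round(shares_sold / (shares_after + shares_sold) * 100, 2)

print(total_value)      # 713190.0
print(remaining_value)  # 265545836.38
print(pct_decrease)     # 0.27
```

All three numbers match the stated transaction value, remaining stake, and 0.27% position decrease.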

MongoDB Profile


MongoDB, Inc., together with its subsidiaries, provides a general-purpose database platform worldwide. The company provides MongoDB Atlas, a hosted multi-cloud database-as-a-service solution; MongoDB Enterprise Advanced, a commercial database server for enterprise customers to run in the cloud, on-premises, or in a hybrid environment; and Community Server, a free-to-download version of its database, which includes the functionality that developers need to get started with MongoDB.


Institutional Ownership by Quarter for MongoDB (NASDAQ:MDB)







Presentation: Enabling Developer Productivity: Intentional Evolution of the Platform

MMS Founder
MMS Jennifer Davis

Article originally posted on InfoQ. Visit InfoQ

Transcript

Davis: I’m going to be talking about enabling developer productivity, an internal and intentional evolution of the platform. In case you were wondering or confused if my words were not clear enough in my description, this is not about Kubernetes, and it’s not about internal development platforms or internal developer platforms, but I want to talk about platforms. I’m Jennifer Davis. I’m an engineering manager at Google. I am also an author. I’m very passionate about building communities and enabling people in different ways.

Developer Productivity

The first part of this, I want to talk about developer productivity. What is it? It’s totally about lines of code, and the number of artifacts you make. It’s getting into that flow. It is? No, not at all. It’s hard to describe. Let’s think about it from a business outcome perspective. Ultimately, the business wants us to build more value, faster all the time. It’s not about an individual’s value. It’s about whatever your company is building, whether it’s a product or a service. If you’re building cars, it’s about building the whole car. If you focus on a single element of that car, the seats or the windows, you’re going to just create artificial bottlenecks that impact the whole team. You’ll be wasting your opportunities to improve. It’s not about the individual, it’s about the team and the whole.

If we look at the research coming out of the DevOps research folks from DORA, it’s been about nine years now that they’ve been doing this research, and there’s a new survey out right now that you can take and participate in. What they’ve done is identified a set of outcomes that are predicted by the team’s performance, the organization’s performance, and all of these are dictated by capabilities.

You can predict how performant a team is based on a number of things. When we think about it, it’s those components, those capabilities that define the developer experience. Those are things like, how hard is it to write code? How hard is it to maintain code? How quickly can you get feedback about your code that you’ve written? How much autonomy do you have in choosing things and selecting how you’re going to solve problems? Maybe you don’t get to say this is the problem, but you get to choose how you do those things. Those are the capabilities that can predict your performance. It’s about your developer experience.

There are hindrances to dev experience. A lot of times, if you think about Conway’s Law, the idea that organizations design systems that mirror their communication structures, then we can see those same challenges in anything that we use. One of my little hobbies is to look at different services out there and guess, from the communication flaws I can see in them, what’s going on within that company and its communication structure. In addition to the dev experience for me personally, or for you individually, it’s about thinking beyond ourselves. It’s something that some people have innate. It’s like, I see a problem.

I can imagine a world in which it’s better, it’s improved for more than one person, but for a whole set of folks. It’s ok if this is not something that’s innate for you; an individual’s experience does not define the DevEx of a product or a service. It’s more the set of experiences that people have. You might not care or need to care about DevEx. You might be the only one in a field. You might not have any competition. I promise you, if your service or product is valuable, someone else is going to look at your terrible experience and say, that’s an opportunity for me to do something better. You can choose not to care about DevEx, but ultimately you’ll want to care about it at some point.

How do we combat this? It’s with this role called DevRel. DevRel is this interdisciplinary role that centers around people and relationships. Every company does DevRel a little differently, but it has the same components of engineering and advocacy and product. It’s a way to combat shipping your org chart with your products and services. Everyone should embrace a little bit of DevRel. I spoke at QCon in London recently, and I was talking about DevRel, and I had totally made this assumption, everybody knows what DevRel is.

Some of the feedback I got made me realize I was doing everyone a disservice. I was doing the exact thing that I talk about, which is making assumptions about what everybody knows. We’re all vulnerable to this, making assumptions, and assuming this is the way. Ultimately, DevRel is going to help you increase adoption, increase engagement, increase enablement, change perceptions about what your company does, but also change your perceptions about what your company does. It’s going to help you identify and drive change that better fits your product or services into the market.

I want to share some of my experience right now, what’s going on and what I do. I said I’m at Google. You might think Google is this huge company, so what do we have to worry about? But it’s not really one monolith; it’s more like a lot of teams all working at the same company, enabled by different sets of tools and possibilities. My team within DevRel is an engineering team. We create samples. I get the meta of the meta in terms of DevEx. We build the platform that helps other contributors at our company and our partners to build samples. Those samples in themselves are a measure of the developer experience that our products and services have.

Once upon a time, we had a single product, App Engine, and so DevRel built samples for App Engine. Then, as Google Cloud grew, we had a split, a reorg to have focused capabilities per language, because we really care about the idiomatic language experience that every developer brings into an environment: what they’re looking for, what they’re trying to solve. If we show them a Java sample that actually follows Node practices, it’s not going to sit well; people are going to have a bad experience. They’re going to feel friction. We reorged based on language. Then that was insufficient, we had so many products. Then we reorged based on product. A platform emerged of shared capabilities while building samples, but it’s all fractured, and things kept getting bolted on based on what people needed.

What that meant is, if a product was doing well, they would staff the DevRel team to build the samples, to build the documentation. You end up with an uneven set of samples in the catalog supporting customers, but customers aren’t coming around, going, I want A sample to do A thing. Sometimes that’s the case, but a lot of the time it’s about a journey. I’m trying to build a website, what do I need? I’m trying to build Pub/Sub-like message delivery, how do I do this? When you have a team that’s handling your platform management on a volunteer basis, contacting different stakeholders and managing their expectations, and then a bunch of top-down driven initiatives as well, you’re not going to be able to accelerate as different parts of the org need different samples; you’re stuck. We were shipping our org chart.

We started thinking about, what is it that we’re actually trying to do? There’s like, step one, we want to increase value. What’s our value? It’s not just samples. It’s code that our users find valuable. Unless our users are able to be successful with these samples, their DevEx of using the platform, we’re not successful, we’re not doing something that’s meaningful. Ultimately, as humans, we want to build something that has meaning. We came up with a set of principles. What are the things that we want to minimize? What do we want to stop doing? What are we going to try to eliminate completely from the work that we’re doing that’s distracting us? We don’t want to build the wrong things. We want to minimize how much work we actually have in progress, so we can actually deliver samples to customers.

We want to stop context switching: from the platform, to staffing, to our stakeholders, to actual individual samples, to friction logging. We don’t want to work on bugs. That feels weird. Why wouldn’t you fix bugs? How do you know that the samples that have bugs are actually the right set of samples? You’re cutting yourself short by focusing on something you don’t even know has value yet. Which comes to the final two really critical pieces: we weren’t learning from our mistakes, and we weren’t following established practices. Everyone would run into a particular problem. They’d share that with the product teams, but it didn’t go across all of DevRel, and so everyone kept experiencing the same sets of challenges. When it comes to samples, partly, samples are part of the individual products, but partly, samples are a product in themselves. That set of samples that you provide to people, that set of samples that support your documentation: if you have bad samples, you’re going to hinder people’s trust in your samples. You have to think about that as a whole, and you want to have a consistent voice in those samples.

The first step in thinking about your platform is identifying the value that you’re creating and those shared capabilities of the platform. What thing are you taking care of for other folks, so they don’t have to all be specialists in them? Where are we going? This is specific to my team. There is no single platform that works for everybody and everybody’s use case. It’s a journey that we all have to discover, which is part of why I’m so passionate about this subject. This is where we’re going. My hope is to enable and empower lots of contributors, including ourselves, and having a lightweight, flexible platform that enables the creation of much broader set of samples, hence the bigger purple cloud.

Platforms

I said platform, what is a platform? Kubernetes is a platform. Google Cloud is a platform. The platform is the combination of much more than just the tools and technology. It’s the tools, the technology. It’s the processes. It’s the workflows. It’s the collaboration, the communication. It’s learning and development. It’s the environment and culture that you create. Platform, of course, it includes the tools and technologies. We can think of this as, how are you solving your particular problem? If you start out trying to solve the technology challenges and just implement random tools, you’re not solving your underlying value.

You can’t just throw away what exists. If you go and architect something and just try to create the new thing, all of the things you have built in the past are not there, and people are going to get really frustrated and angry, and you’re not going to have adoption of your platform. You have to think about your constraints, what exists today. You also have to question your assumptions. I’m going to talk a little bit later about some of the assumptions we’ve made and how we’ve changed it.

Ultimately, platforms need to be lightweight and evolve. One of the constraints we have is we do all our sample development on GitHub. That might seem strange. Of course, that makes sense. You’re making it open and available to people. Internally, contributors want to use the tools they have available that they know. Why can’t we just use the Google tools? They come with all these extra measurements. Why can’t we just do that? There are assumptions embedded in that, assumptions that we have as well. We had to question ourselves: are we using the right set of workflows? We determined, yes, we need to create these samples in the open, available for people to see and explore, and build trust.

We also need to build them here, in this external place, because that’s what our customers are doing. We are the zeroth customer, understanding how this platform works. If we’re leveraging things internally to build stuff, we’re not seeing all the friction. We’re not experiencing the friction. We’re not getting that feedback back to the products. If it’s so bad, that is something that needs to be improved. It’s being aware of your constraints when it comes to your tools and technology, so that you know when and what to change and how to change it.

You also want to minimize human toil. It’s not about wanting to do the thing just because it’s hard. There are a lot of things we can get machines to do. Some of the things that we use, for example, are GitHub Actions that allow us to label incoming pull requests by context, and route them to whoever is most apt and expert in that area to handle reviews. We also set up linting so that when something comes in, people get that fast feedback. Is there something that they need to correct before it actually makes it into the system, or before someone actually reviews it? We set and establish sets of guidelines. In addition to Google style guides for all the different languages we support, we set explicit tiered sets of responsibilities and expectations people have about our samples, and that’s to encourage people to follow and have that single voice.
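The path-based triage described above can be sketched in a few lines. This is a minimal illustration, not the team's actual automation: the path prefixes, label names, and reviewer handles are all invented for the example.

```python
# Sketch of path-based PR triage, like the GitHub Actions flow described
# above. Prefixes, labels, and reviewer handles are hypothetical.
ROUTES = {
    "samples/storage/": ("api: storage", ["@storage-sample-reviewers"]),
    "samples/pubsub/":  ("api: pubsub",  ["@pubsub-sample-reviewers"]),
    "samples/run/":     ("api: run",     ["@run-sample-reviewers"]),
}

def triage(changed_files):
    """Return the labels and reviewers a PR should get from its diff."""
    labels, reviewers = set(), set()
    for path in changed_files:
        for prefix, (label, owners) in ROUTES.items():
            if path.startswith(prefix):
                labels.add(label)
                reviewers.update(owners)
    return sorted(labels), sorted(reviewers)

labels, reviewers = triage(["samples/pubsub/publish.py", "README.md"])
print(labels)     # ['api: pubsub']
print(reviewers)  # ['@pubsub-sample-reviewers']
```

In a real repository this mapping would typically live in a labeler config consumed by a GitHub Action rather than in application code; the point is that routing is a pure function of the changed paths.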

Platforms include the processes and workflows. Wherever you start from, that’s where you start. You have to test what your assumptions are about those workflows. One of those I mentioned with using GitHub and working in the open. Within a team, you want to also establish some common work item vocabulary, basically set how you’re going to handle issues that come in, have a single intake, how you assess the cost of building things. That way people understand and can communicate, and you build that trust within the team. Trust is crucial for a high-performing team. If you have situations where people can question because work’s not out in the open, or there’s not transparency, or there’s not consistency, you can have distrust.

That person isn’t working on the important things that you say are important, they’re just doing their own thing. That didn’t take much time, where is the actual work and impact? By establishing a set of vocabulary about how we talk about work, it creates more confidence and understanding over time. It’s not about measuring the performance of individuals; it’s sharing across a team how the team is performing, so people can build trust.

We also do something called friction logging, and it’s not just for products in development, but it’s also an activity that helps us to work and share information about stuff. Because in DevRel, one of the concerns is, is there like a half-life? Can you only be in DevRel for some amount of time? The truth is, I think everybody should be doing DevRel, but also being able to try things out and explore them as a zeroth customer. You get the opportunity to take on the empathy, look at the DevEx of your product or your service, explore a journey, and provide meaningful feedback to the people building that particular product or service.

It also helps when onboarding someone. We’d have a little exercise of doing a friction log as a team exercise to understand what is it that we’re making assumptions about that we shouldn’t actually assume. Maybe there’s missing knowledge that we haven’t documented. Maybe there’s practices that have evolved in the community and we’ve missed them, and this helps us instill and grow our culture. Which goes to the platform is the collaboration and communication. When you’re trying to solve something, it’s more than just you. It starts within the team. You definitely need open communication and an ability to provide feedback to each other. Because if everyone is just saying, yes, that’s great, you’re not getting that critical feedback to be better or to be able to understand different perspectives.

You need to establish that set of trust and ensure there’s not contempt happening in any way on the team, or stonewalling, not answering people’s questions. Key to this is not building an us-and-them across the org. Especially within DevRel, we have stakeholders across the whole organization. We can’t get into a mindset of, they just do this, and it causes us so many problems. Because the minute we get into that, we’re going to harm our ability to work with people. We have to come from a place of yes, and. There’s a component of thinking through how you do this, not just within your organization, but across the industry as well. The industry frames and changes things. People have choice.

People are going to take a selection of different possibilities and build things upon them. When you choose a service from here and choose a service from there, GitLab, Datadog, you don’t have the capability to say, I’m going to ignore everything else and not care. If you think about this interteam and across the industry, how we’re building our relationships and how we accomplish and solve problems, it’s really crucial to actually driving performance.

It’s about learning and development. One of the first things we did as we formed our new team and started thinking holistically across the platform, not just about a specific set of products, was to ask: we have to be able to scale, so how are we going to scale? We had a set of training where we trained everybody on, how do you update a sample? How do you submit a PR? How do you submit a change list for our documentation, so the change that you made actually shows up in the documentation? All of this is part of a normal cadence.

Instead of just saying, people can figure it out themselves, actually taking the opportunity and making time for people to uplevel. Making explicit documentation about what you chose to do and why, because ultimately, the decisions you make about anything, they have to be available for evaluation, or you’re going to keep evaluating the same options over and over as the platform changes, or the needs of the platform change. New technology and new tools come to be available.

One of the things we do with our samples that we create, as well as with the platform itself, is to document, why do we choose to do this thing? Why are we coding in the open? Why do we use GitHub? That’s a decision record. We want to take note of that. We encourage sharing, we call it show and tell. It doesn’t have any expectations. Sometimes people feel vulnerable about saying, it’s a demo, when it’s half-baked. It’s ok when it’s show and tell, you’re just showing and telling about something you learned, or something you accomplished, something you tested out.

I encourage people to contribute to open-source projects. That seems redundant, because we work in open source, but here’s the challenge, it becomes too insular and siloed. You don’t notice having to work with other people. When you go into open-source projects, you learn better how to evolve your platform, because you’re able to see the different practices across the industry and adjust your expectations of how things could or should work. It’s easy to get caught up in the today, and here’s the focus and here’s the energy.

When you’re contributing to an open-source project, it lets you step away and think about things in a slightly different way. Which goes to the platform is the environment and culture. What is it that you want out of your platform? What is the space that you want to create for people? Thinking about the team rituals and seeding them with things that you care about. Some of the ones I’ve seeded within my team is we start our meetings with music, it gives people a little time. Because we’re a distributed team, we want time for people to connect with each other, and maybe they’ve been busy doing something all week, that music just provides that little easing into that sharing, where we talk through, how are you doing? This is the team temperature check using zones of regulation.

It gives people, if they’re feeling safe, the space and time to talk about how they’re doing and feeling, showing that mutual care for each other. They don’t have to share if they don’t want to. We end the meeting with kudos, just a little bit of gratitude; it sets everyone up for a happy rest of their day, maybe fuels them for the rest of the week. When people leave the team, we celebrate it. We don’t just say goodbye. We actually take time to celebrate the things that we’ve built and done with them, and that builds up more trust into the actual building of the platform itself. We play. We create samples and demos that go beyond just, here’s this thing. We think about, how do we engage our active whimsy? That makes it approachable for other people. We built this train demo: it’s open source. The concept was this game of, can you build a set of components, a working architecture? We calculated the logic behind it; it was this meta-on-meta situation where we’re using cloud to build a test on cloud and build education.

Intentionally Evolving the Platform

I’ve talked a little bit about the platform. I’ve talked a little bit about developer performance. Then, how do you evolve the platform? First is keeping in mind the people, because the people are core. A lot of the things I just talked about with parts of the platform are about people. You want to establish an active communication plan. You want to be telling people and informing people on a regular rhythm. You want to make sure you know who you’re talking to and why you’re talking to them. You want to create a RACI, and that’s basically setting up a plan of the clear roles and responsibilities so everyone knows. It’s like the contract. If you do these things, we’ll do those things. Once you’ve identified and documented them, you can embed them into your planning of your technology. Who has capabilities? What are those capabilities? What are the contracts that you’re making, and establish with samples.

For us, the person who owns or is accountable to their sample, if they don’t update their sample, and I can’t automatically update it, then it’s going to get marked as something that can be archived. We’ve agreed to that contract based on the RACI. It’s connecting the effort to the value so people understand, why am I working on this particular thing? Because it helps build value over here. It’s making sure to celebrate the wins.

You can identify the set of metrics that matter to you within your org; a starting place might be DORA or SPACE. When it comes to DORA, there’s a set of metrics that I mentioned earlier that have been shown to predict software delivery performance. This is a starting place. For us, when I look at what our environment is, and these are updated metrics since the last time I gave a talk that included metrics: we have 13,798 samples that we need to monitor and update and maintain. There are approximately another 6,000 samples that are not actually in our docs yet. We’re trying to reduce that count so that all of our samples are available in our documentation. We have 8,352 distinct use cases, meaning there are specific journeys that we’re explaining to our developers. How do we think about how we would measure performance or the experience?

Remember, our goal, ultimately, for our platform is a double set of requirements. Right now, we’re focusing on our contributor metrics. Ultimately, we want to empower developers who come to use Google Cloud. Right now, we’re trying to grow our catalog, so our problem is quantity and quality. Our metrics have evolved slightly from the DORA metrics. You can see the hints in them. We want to think about how costly it is to update a sample and to catch problems with it. What is the right amount of effort that we should spend on updating our own samples? Areas that are easy to measure are things like time to ship. That’s from the point that you start to submit a PR to the point that it actually gets into documentation. It’s shown that high-performing teams are able to do this in hours. It might not surprise you to hear that it takes days to weeks for some of our samples to ship as a baseline. That’s improved.

Rollbacks for us are when a sample goes out into the wild, and then we have to go and make changes to it. It’s not like rolling back your production, but something leaked through that actually caused problems, and did not help people. Then, how often we’re delivering samples. A hard one for us to measure is: is our system green? How much should we be spending on testing our samples? What is that quality? This is the first set of metrics we’ve established to define how productive our developers can be, and how effective they can be. Based on these metrics, we can change and adjust what we’re doing.
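As a purely hypothetical sketch (the talk does not specify a schema; the event fields here are invented for illustration), metrics like time to ship and a rollback rate could be computed from a list of shipping events:

```python
from datetime import datetime, timedelta
from statistics import median

# Hypothetical shipping events: when a sample's PR was opened, when it
# landed in the docs, and whether it later had to be fixed in the wild.
events = [
    {"pr_opened": datetime(2024, 5, 1), "shipped": datetime(2024, 5, 3), "rolled_back": False},
    {"pr_opened": datetime(2024, 5, 2), "shipped": datetime(2024, 5, 16), "rolled_back": True},
    {"pr_opened": datetime(2024, 5, 4), "shipped": datetime(2024, 5, 5), "rolled_back": False},
]

# "Time to ship": from PR opened to live in documentation.
median_time_to_ship = median(e["shipped"] - e["pr_opened"] for e in events)

# Rollback rate: share of samples that needed fixes after shipping.
rollback_rate = sum(e["rolled_back"] for e in events) / len(events)
```

Tracking a median rather than a mean keeps a few very slow samples from hiding how the typical contribution flows.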

The first thing we did was we friction logged the sample contribution experience. We’ve done this multiple times, and we’ve gained additional information each time. If you think about how large a company is, and you think about who possibly could help with something, with samples, we could have a large set of folks that could work on samples, except we’ve always thought about it from a, here’s the set of folks, the DPEs, the developer program engineers, they’re the ones working on this code. We want to make it self-service. We want anybody who is interested in contributing samples to be empowered to contribute samples. We need to take on that experience. Talk to the tech writers, talk to the advocates, talk to the sales engineers, talk to the support engineers, find out what is hard about doing this. We’ve uncovered a lot.

One of the challenges we identified is in our review capacity, thinking about how long it takes for code to get into production. Part of this is whether there is availability for someone to review the code. We realized over time we’d had this set of patterns. We had a whole mentoring program. It took months to get to reviews. We just did not have that ability anymore to have that long lead time. We are taking a risk. We want to trust people to do the right thing, but we want to hold them accountable and make sure we’re measuring and seeing the impact of people’s reviews. Then, we want to recognize the quality behaviors. We have little badges to showcase when people are quality reviewers.

We decided to eliminate flaky testing. Originally, we did some research, and we were like, 78% of our alerts are noise, but they are solving some issues, so maybe we’ll progressively fix this problem. We determined that actually we’re not going to get there anytime soon, and we should quell the noise, so we’ve eliminated our flaky testing. We’re trying out AI. I want to say that for us, core to samples is trust. We know that people copy and paste our sample code directly into their production environment. I’m not saying that’s what they should do, but we recognize it’s what they do. There are things that we have found that we’re exploring with small experiments to determine areas where we can improve the overall experience.

When people file issues across our 100 and something odd repos, what if we’re able to assess things more quickly and consistently, based on training from our previous issues, to get better results in responding to specific types of issues? We’ve also looked at metadata generation: of those roughly 19,000 samples, approximately 7,000 are not embedded in documentation. Part of that is because there’s no metadata associated, meaning their title and description, like the intent of the sample. Because the model is trained specifically on our samples, it can provide context and help us to initiate a set of descriptions that helps us get our samples into the autogenerated pages.

We’ve also found that it’s helpful in terms of giving feedback on PRs. We have that set of extensive style guides, and it takes a reviewer, and a contributor, knowing and understanding all of those components. If we train a model directly on our style guides, we’re able to get a specific set of feedback that’s helpful, that says, here’s where you’re having this problem. It links to the specific style guide issue, and that provides a better experience.

Recap

I’ve talked a lot about different pieces of this, in terms of what developer productivity and platforms are, and my thoughts on them, and about intentionally evolving your platform to deliver the value that you are trying to build for your company or your service or your platform. Ultimately, it’s really important to think about what the dev experience is, and to invest some amount of time in that dev experience, to help each one of us solve problems in ways that are better for us as an industry.

Questions and Answers

Participant 1: Imagine that I am an IC, an individual contributor in my company, and my company has small silos; each sub-project has their own APIs, their own SDK. When my customers use those sub-projects, each SDK would look different. Being an IC in one of those projects, how can I influence my peers to start discussing developer productivity and how to have a cohesive experience over the whole platform?

Davis: It goes to the whole culture of the environment. When you’re in a space where your company is very siloed, and that’s what happens, you have to have some kind of leadership change that supports and encourages it. Coming from the ground up, if you are in this situation, you can reach out and create a technical leads program. That’s one of the things that someone started at Google, actually. It encourages and starts discussions across teams. When people find these common problems, people want to help. Another part of it is navigating how you’re discussing and sharing or advocating for the problem, whether you’re talking to leaders, going to your technical lead as an IC, or you’re not considered an official technical lead but you see a problem.

If you frame it as: there’s this problem, and I need to fix it. In this case, we’re shipping all these APIs, and they’re all different, and the user experience is not quality. It seems obvious: there’s a problem, we should fix this. Ultimately, it comes down to communicating in the language of whoever you’re talking to and knowing what’s important to them. In your case, it’s really challenging, because you can inspire and get everyone on board, yes, but then do you have the investment to make change? You speak to whatever motivates people: all our competitors are doing this, look at that. That’s one way.

Another way is, “I did a friction log. Here’s the things that create a lot of friction”. Talk to support engineers. Get that support there. If you can reduce support costs, because those are very expensive, like by the time something is problematic in the environment and a customer’s reporting, that’s costly. Or if you can improve people’s productivity, that’s a set of things that can change leaders’ minds. You don’t talk about it from the problem. You talk about it from the outcome and how it will support things.

Participant 2: You have 15 awesome examples of things to work on, if you were to pick one to start with, which one would it be?

Davis: The first step for me was figuring out what the problem was. I’ve described a lot of problems, but I didn’t describe the big one. The big one is fragmentation, which means we’re spending a lot of duplicated effort. The very first thing is to figure out: are there areas where you’re overspending effort? Then navigate how you talk to people, your leadership, your peers, your reports, whatever the case is, and identify how you can help people change their minds. It’s not easy. Getting people to think about disabling flaky bot was hard. It takes time to get to a decision where people are comfortable because you’re making change. You don’t want to do wholesale big changes, so you have to identify: what is going on? What are the risks? What are people afraid of? What are people wanting? What are people valuing? Once you establish that, you can tackle whatever the next thing is, and be willing to fail.

Participant 3: I have a question related to one of the measurements that I look at when I try to measure the engagement and motivation of my developers and data scientists. I would like just to get your opinion on that. They are telling us that they would like to see the connection between their actual work and the mission and vision of the company. Sometimes for us as managers, leaders, it’s easy to see that kind of connection, but it’s hard actually to break it down in actual projects, and most importantly, to show them the linkage between their everyday work and the vision, mission of the company or the organization. What would be your advice for this kind of work as a manager and leader?

Davis: If you think about it, and I’m going to add a little more context, samples, one of the things we want is we want to trust. In open source, you really want people to trust you. Any time you talk about tracking and data collection, because you want to collect data to improve, not to do anything bad, but to improve, it causes problems. Then, how do you, as an IC at a company, identify what changes actually matter and what’s valuable? As a whole, our samples, people can go to GitHub, and they can go copy and paste, but how do I know it’s actually helpful?

One thing is, depending on the tracking abilities you have within your teams, you can identify and see how many things are deployed. We have something called Jumpstart solutions, where we can see direct impact of if someone deploys a solution, how long it stays deployed, how it evolves. Are they sticky? Are we enabling people? Are they learning more? What does that impact? When people can see those real numbers of you’ve enabled or you’ve engaged, it’s great, but it’s tricky. You have to map things into a different framing.

Part of this is getting people to talk about what you’re working on, and then tying it explicitly into the larger org, and repeating the message over again: “This is to do this, and it’s driving these sets of changes”. You also have to recognize when they’re not seeing that value return, when the feedback loop isn’t bringing anything back and changing their work. If you’re not doing retros or post-mortems or whatever you want to call it, and you’re not incorporating change, it just feels like they’re throwing stuff out into the void. It’s really important to incorporate practices that also enable the learning loop.

Participant 4: You are talking about developer productivity. We have several smart developers on the team who want to be productive, more productive than they are right now, but they all have different definitions of what productive means. What’s your recommendation on reconciling the definitions and how to do it in politically nice terms without offending them much?

Davis: When we think about productivity, that’s why part of it is productivity of a team as a whole versus productivity as individuals. Individuals can be productive in whatever way they want, how they measure, it’s great. For me, I will tell my boss, I need my cookies, because when I get my cookies, they’re not real cookies, it’s just like, good job, that’s my cookie. It’s not even qualitative, it’s just occasionally I need that. I have my own set of metrics for my performance, but it’s one of the things why it’s really crucial, what you measure is going to influence what you get. If you say I need lines of code, you’re going to get more lines of code.

If you say, I want clicks to a URL, I know how to write a nice little tool that will automate clicks to a URL, because I care about different things. To change what you’re saying a little bit, it’s ok for everyone to have different measures of productivity, but you need to clearly articulate, with a common set of work vocabulary, what the goal is. We’re building cars, your part is that cog. If you have too many cogs, this other piece needs focus, and this is creating a bottleneck. Encourage people to have mutual care and reciprocal trust to engage and enable each other. It’s really important as a manager to note when someone is not performing and to manage that in a kind way, because ignoring performance issues harms the team.

See more presentations with transcripts

Subscribe for MMS Newsletter

By signing up, you will receive updates about our latest information.



Covea Finance Takes Position in MongoDB, Inc. (NASDAQ:MDB) – MarketBeat

MMS Founder
MMS RSS

Posted on mongodb google news. Visit mongodb google news

Covea Finance bought a new position in shares of MongoDB, Inc. (NASDAQ:MDB) during the fourth quarter, according to its most recent Form 13F filing with the Securities & Exchange Commission. The firm bought 16,500 shares of the company’s stock, valued at approximately $3,841,000.

Several other institutional investors and hedge funds have also recently added to or reduced their stakes in the company. Hilltop National Bank grew its holdings in shares of MongoDB by 47.2% during the 4th quarter. Hilltop National Bank now owns 131 shares of the company’s stock worth $30,000 after purchasing an additional 42 shares in the last quarter. Quarry LP boosted its stake in MongoDB by 2,580.0% during the second quarter. Quarry LP now owns 134 shares of the company’s stock worth $33,000 after buying an additional 129 shares in the last quarter. Brooklyn Investment Group bought a new stake in MongoDB in the 3rd quarter valued at $36,000. GAMMA Investing LLC raised its stake in shares of MongoDB by 178.8% in the 3rd quarter. GAMMA Investing LLC now owns 145 shares of the company’s stock valued at $39,000 after buying an additional 93 shares in the last quarter. Finally, Continuum Advisory LLC boosted its position in shares of MongoDB by 621.1% during the 3rd quarter. Continuum Advisory LLC now owns 137 shares of the company’s stock valued at $40,000 after acquiring an additional 118 shares in the last quarter. 89.29% of the stock is currently owned by institutional investors.

Insider Buying and Selling at MongoDB

In other MongoDB news, CAO Thomas Bull sold 1,000 shares of the stock in a transaction on Monday, December 9th. The shares were sold at an average price of $355.92, for a total value of $355,920.00. Following the sale, the chief accounting officer now directly owns 15,068 shares in the company, valued at approximately $5,363,002.56. This trade represents a 6.22% decrease in their ownership of the stock. The transaction was disclosed in a document filed with the SEC, which is accessible through the SEC website. Also, CEO Dev Ittycheria sold 2,581 shares of the business’s stock in a transaction on Thursday, January 2nd. The shares were sold at an average price of $234.09, for a total transaction of $604,186.29. Following the completion of the transaction, the chief executive officer now directly owns 217,294 shares of the company’s stock, valued at $50,866,352.46. This trade represents a 1.17% decrease in their position. The disclosure for this sale can be found here. Over the last quarter, insiders have sold 23,776 shares of company stock valued at $6,577,625. 3.60% of the stock is currently owned by company insiders.

MongoDB Trading Up 0.9 %

Shares of NASDAQ MDB opened at $242.41 on Wednesday. The firm has a market capitalization of $18.05 billion, a PE ratio of -88.47 and a beta of 1.25. The firm’s 50-day moving average is $280.67 and its 200-day moving average is $269.51. MongoDB, Inc. has a 1 year low of $212.74 and a 1 year high of $509.62.

MongoDB (NASDAQ:MDB) last announced its quarterly earnings data on Monday, December 9th. The company reported $1.16 earnings per share (EPS) for the quarter, beating analysts’ consensus estimates of $0.68 by $0.48. MongoDB had a negative net margin of 10.46% and a negative return on equity of 12.22%. The firm had revenue of $529.40 million for the quarter, compared to analysts’ expectations of $497.39 million. During the same period last year, the firm earned $0.96 earnings per share. The business’s revenue for the quarter was up 22.3% on a year-over-year basis. As a group, equities research analysts anticipate that MongoDB, Inc. will post -1.86 earnings per share for the current fiscal year.

Analyst Upgrades and Downgrades

Several research analysts recently issued reports on MDB shares. Mizuho raised their price target on shares of MongoDB from $275.00 to $320.00 and gave the stock a “neutral” rating in a research note on Tuesday, December 10th. Tigress Financial boosted their price target on MongoDB from $400.00 to $430.00 and gave the company a “buy” rating in a research note on Wednesday, December 18th. Macquarie began coverage on MongoDB in a research note on Thursday, December 12th. They set a “neutral” rating and a $300.00 price objective for the company. KeyCorp boosted their target price on shares of MongoDB from $330.00 to $375.00 and gave the company an “overweight” rating in a research note on Thursday, December 5th. Finally, Wedbush upgraded shares of MongoDB to a “strong-buy” rating in a research report on Thursday, October 17th. Two research analysts have rated the stock with a sell rating, four have assigned a hold rating, twenty-two have issued a buy rating and one has issued a strong buy rating to the company. Based on data from MarketBeat.com, the company presently has a consensus rating of “Moderate Buy” and an average target price of $364.64.

Get Our Latest Stock Analysis on MDB

MongoDB Company Profile

(Free Report)

MongoDB, Inc., together with its subsidiaries, provides a general purpose database platform worldwide. The company provides MongoDB Atlas, a hosted multi-cloud database-as-a-service solution; MongoDB Enterprise Advanced, a commercial database server for enterprise customers to run in the cloud, on-premises, or in a hybrid environment; and Community Server, a free-to-download version of its database, which includes the functionality that developers need to get started with MongoDB.

Recommended Stories

Institutional Ownership by Quarter for MongoDB (NASDAQ:MDB)

This instant news alert was generated by narrative science technology and financial data from MarketBeat in order to provide readers with the fastest and most accurate reporting. This story was reviewed by MarketBeat’s editorial team prior to publication. Please send any questions or comments about this story to contact@marketbeat.com.

Before you consider MongoDB, you’ll want to hear this.

MarketBeat keeps track of Wall Street’s top-rated and best performing research analysts and the stocks they recommend to their clients on a daily basis. MarketBeat has identified the five stocks that top analysts are quietly whispering to their clients to buy now before the broader market catches on… and MongoDB wasn’t on the list.

While MongoDB currently has a “Moderate Buy” rating among analysts, top-rated analysts believe these five stocks are better buys.

View The Five Stocks Here

7 Stocks to Own Before the 2024 Election Cover

Looking to avoid the hassle of mudslinging, volatility, and uncertainty? You’d need to be out of the market, which isn’t viable. So where should investors put their money? Find out with this report.

Get This Free Report


Article originally posted on mongodb google news. Visit mongodb google news



Harbor Capital Advisors Inc. Sells 2,226 Shares of MongoDB, Inc. (NASDAQ:MDB)

MMS Founder
MMS RSS

Posted on mongodb google news. Visit mongodb google news

Harbor Capital Advisors Inc. lessened its holdings in shares of MongoDB, Inc. (NASDAQ:MDB) by 63.1% during the 4th quarter, according to its most recent filing with the Securities and Exchange Commission. The firm owned 1,301 shares of the company’s stock after selling 2,226 shares during the quarter. Harbor Capital Advisors Inc.’s holdings in MongoDB were worth $303,000 as of its most recent SEC filing.

Other hedge funds and other institutional investors have also recently added to or reduced their stakes in the company. Hilltop National Bank increased its position in shares of MongoDB by 47.2% during the fourth quarter. Hilltop National Bank now owns 131 shares of the company’s stock worth $30,000 after acquiring an additional 42 shares in the last quarter. Quarry LP boosted its stake in MongoDB by 2,580.0% in the 2nd quarter. Quarry LP now owns 134 shares of the company’s stock worth $33,000 after purchasing an additional 129 shares during the period. Brooklyn Investment Group bought a new position in MongoDB during the 3rd quarter worth about $36,000. Continuum Advisory LLC raised its stake in shares of MongoDB by 621.1% in the 3rd quarter. Continuum Advisory LLC now owns 137 shares of the company’s stock valued at $40,000 after purchasing an additional 118 shares during the period. Finally, GAMMA Investing LLC lifted its holdings in shares of MongoDB by 178.8% in the third quarter. GAMMA Investing LLC now owns 145 shares of the company’s stock valued at $39,000 after purchasing an additional 93 shares in the last quarter. 89.29% of the stock is currently owned by institutional investors.

MongoDB Price Performance

MDB traded up $7.40 on Wednesday, hitting $249.81. 121,113 shares of the stock were exchanged, compared to its average volume of 1,432,734. MongoDB, Inc. has a 12 month low of $212.74 and a 12 month high of $509.62. The firm has a market capitalization of $18.60 billion, a price-to-earnings ratio of -91.17 and a beta of 1.25. The business has a 50-day moving average of $280.67 and a two-hundred day moving average of $269.51.

MongoDB (NASDAQ:MDB) last posted its earnings results on Monday, December 9th. The company reported $1.16 earnings per share for the quarter, topping analysts’ consensus estimates of $0.68 by $0.48. The firm had revenue of $529.40 million for the quarter, compared to analyst estimates of $497.39 million. MongoDB had a negative net margin of 10.46% and a negative return on equity of 12.22%. The company’s revenue for the quarter was up 22.3% compared to the same quarter last year. During the same period in the previous year, the firm posted $0.96 EPS. Equities research analysts expect that MongoDB, Inc. will post -1.86 earnings per share for the current fiscal year.

Analysts Set New Price Targets

A number of research analysts recently issued reports on MDB shares. Rosenblatt Securities started coverage on MongoDB in a report on Tuesday, December 17th. They set a “buy” rating and a $350.00 price objective for the company. Stifel Nicolaus raised their price target on shares of MongoDB from $325.00 to $360.00 and gave the stock a “buy” rating in a research note on Monday, December 9th. Tigress Financial boosted their price objective on shares of MongoDB from $400.00 to $430.00 and gave the stock a “buy” rating in a research report on Wednesday, December 18th. Monness Crespi & Hardt cut shares of MongoDB from a “neutral” rating to a “sell” rating and set a $220.00 target price for the company in a report on Monday, December 16th. Finally, Mizuho lifted their price target on shares of MongoDB from $275.00 to $320.00 and gave the company a “neutral” rating in a research note on Tuesday, December 10th. Two equities research analysts have rated the stock with a sell rating, four have given a hold rating, twenty-two have given a buy rating and one has given a strong buy rating to the company. According to data from MarketBeat.com, the company presently has an average rating of “Moderate Buy” and a consensus target price of $364.64.

View Our Latest Report on MongoDB

Insider Transactions at MongoDB

In other MongoDB news, CFO Michael Lawrence Gordon sold 5,000 shares of the stock in a transaction that occurred on Monday, December 16th. The stock was sold at an average price of $267.85, for a total transaction of $1,339,250.00. Following the transaction, the chief financial officer now directly owns 80,307 shares in the company, valued at approximately $21,510,229.95. This represents a 5.86% decrease in their position. The transaction was disclosed in a filing with the Securities & Exchange Commission, which is accessible through this link. Also, CAO Thomas Bull sold 169 shares of MongoDB stock in a transaction on Thursday, January 2nd. The shares were sold at an average price of $234.09, for a total transaction of $39,561.21. Following the completion of the sale, the chief accounting officer now owns 14,899 shares of the company’s stock, valued at $3,487,706.91. This represents a 1.12% decrease in their position. The disclosure for this sale can be found here. Insiders sold a total of 23,776 shares of company stock worth $6,577,625 over the last three months. 3.60% of the stock is owned by company insiders.

MongoDB Company Profile

(Free Report)

MongoDB, Inc., together with its subsidiaries, provides a general purpose database platform worldwide. The company provides MongoDB Atlas, a hosted multi-cloud database-as-a-service solution; MongoDB Enterprise Advanced, a commercial database server for enterprise customers to run in the cloud, on-premises, or in a hybrid environment; and Community Server, a free-to-download version of its database, which includes the functionality that developers need to get started with MongoDB.

Featured Articles

Institutional Ownership by Quarter for MongoDB (NASDAQ:MDB)



Article originally posted on mongodb google news. Visit mongodb google news



How Database Storage Engines Have Evolved for Internet Scale – The New Stack

MMS Founder
MMS RSS

Posted on nosqlgooglealerts. Visit nosqlgooglealerts



Out-of-place updates drive excellent write performance relative to in-place updates, but sacrifice read performance in the bargain.


Jan 14th, 2025 9:00am



The design of database storage engines is pivotal to their performance. Over decades, SQL and NoSQL databases have developed various techniques to optimize data storage and retrieval.

Database storage engines have evolved from early relational systems to modern distributed SQL and NoSQL databases. While early relational systems relied on in-place updates to records, modern systems — both distributed relational databases and NoSQL databases — primarily use out-of-place updates. The term “record” is used here to refer to both tuples in a relational database and key-values in a NoSQL store.

Out-of-place updates became popular as a result of the extremely heavy write workloads that modern databases encountered with the advent of internet-scale user events, as well as automated events from sensors (e.g., Internet of Things) flowing into a database.

These two contrasting approaches — in-place updates and out-of-place updates — show how out-of-place updates drive excellent write performance relative to in-place updates, but sacrifice read performance in the bargain.
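As a rough illustration of the trade-off, here is a minimal sketch of the two update styles (illustrative only, not modeled on any particular engine): an in-place store overwrites a record where it lives, while an out-of-place store appends a new version to a log and repoints an index, leaving old versions behind for later cleanup.

```python
# In-place: overwrite the record where it lives. Cheap to read back,
# but every update is a random write into existing storage.
store_in_place = {"user:1": {"name": "Ada", "visits": 1}}
store_in_place["user:1"]["visits"] = 2

# Out-of-place: append a new version to a log and repoint an index.
log = []    # append-only record storage
index = {}  # key -> offset of the latest version

def put(key, value):
    index[key] = len(log)     # the slot the append below will fill
    log.append((key, value))  # sequential write: cheap under heavy write load

def get(key):
    # Reads pay the price: they must go through the index to find the
    # newest version; old versions linger until compaction reclaims them.
    return log[index[key]][1]

put("user:1", {"name": "Ada", "visits": 1})
put("user:1", {"name": "Ada", "visits": 2})  # old version stays in the log
```

The append-only path turns write-heavy workloads into sequential I/O, which is exactly why internet-scale event streams pushed engines toward out-of-place designs.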

Layers of a Storage Engine

Let’s begin with an overview of the layered architecture of storage engines. A database storage engine typically consists of three layers:

  1. Block storage: The foundational layer, providing block-level access through raw devices, file systems or cloud storage. Databases organize these blocks for scalable data storage.
  2. Record storage: Built atop block storage, this layer organizes records into blocks, enabling table or namespace scans. Early relational systems usually updated records in place, while more modern storage engines use out-of-place updates.
  3. Access methods: The topmost layer includes primary and secondary indexes, facilitating efficient data retrieval. Updates to access methods can also be in place or out of place, as we will see shortly. Many current systems apply the same methodology, in-place or out-of-place updates, to both the record storage and the access methods. We will therefore discuss these two layers together in the context of how they are updated.

Let’s delve deeper into each layer.

Block Storage

At its core, the block storage layer organizes data into manageable units called blocks (B1 and B2 in Figure 1 below). These blocks act as the fundamental storage units, with higher layers organizing them to meet database requirements. Figure 1 illustrates a basic block storage system. Record storage and access methods are built on top of the block storage. There are two broad categories of record storage and access methods corresponding to whether updates happen in place or out of place. We will describe the record storage and access methods under these categories next.

Figure 1: Block storage showing blocks B1 and B2.

Storage and Access Methods With In-Place Updates

The approach of updating records and the access methods in place was the standard in early relational databases. Figure 2 (below) illustrates how a block in such a system is organized and managed to provide a record storage API. Notable features of such a record storage layer include:

  • Variable length records: Records often vary in size, and the size may change during updates. To minimize additional IO operations during updates, the record storage layer actively manages block space to accommodate updates within the block.
  • One level of indirection: Each record within a block is identified by a slot number, making the record ID (RID) a combination of the block ID and slot number. This indirection allows a record to move freely within the block without changing its RID.
  • Slot map: A slot map tracks the physical location of each record within a block. It grows from the beginning of the block while records grow from the end, leaving free space in between. This design allows blocks to accommodate a variable number of records depending on their sizes, and supports dynamic resizing of records within the available space.
  • Record migration: When a record grows too large to fit within its original block, it is moved to a new block, resulting in a change to its RID.
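The slotted-page layout described by these features can be sketched in a few lines. This is a toy model, not any particular engine's format; the class name, the 8-byte slot entry size, and the block size are illustrative assumptions.

```python
# Toy slotted page: slot map grows from the front, records from the back.
BLOCK_SIZE = 4096
SLOT_ENTRY_SIZE = 8  # assumed size of one (offset, length) slot entry

class SlottedPage:
    def __init__(self, block_id):
        self.block_id = block_id
        self.data = bytearray(BLOCK_SIZE)
        self.slots = []              # slot map: slot number -> (offset, length)
        self.free_end = BLOCK_SIZE   # records grow from the end of the block

    def free_space(self):
        # Free space sits between the slot map and the record area.
        return self.free_end - SLOT_ENTRY_SIZE * len(self.slots)

    def insert(self, record: bytes):
        """Store a record; return its RID (block ID, slot number)."""
        if len(record) + SLOT_ENTRY_SIZE > self.free_space():
            return None  # caller must migrate the record to another block
        self.free_end -= len(record)
        self.data[self.free_end:self.free_end + len(record)] = record
        self.slots.append((self.free_end, len(record)))
        return (self.block_id, len(self.slots) - 1)

    def read(self, slot: int) -> bytes:
        offset, length = self.slots[slot]
        return bytes(self.data[offset:offset + length])
```

Because the RID only names the slot, a record can be shuffled inside the block (updating the slot map entry) without its RID ever changing.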

Figure 2: Record storage for in-place updates, showing how a block is organized internally.

Access methods are built on top of record storage to efficiently retrieve records. They include:

  • Primary indexes: These indexes map primary key fields to their corresponding RIDs.
  • Secondary indexes: These indexes map other field values (potentially shared by multiple records) to their RIDs.

If the index fits completely in memory, self-balancing trees such as red-black (RB) trees are used. If the index lives primarily on disk (with parts possibly cached in memory), B+-trees are used. Figure 3 shows a B+-tree on top of a record storage. Primary and secondary indexes share the same entry format (field value and RID).
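As a toy illustration of that shared entry format, here is a minimal index in which a sorted list (via `bisect`) stands in for the RB-tree or B+-tree; the names are illustrative, not any engine's API.

```python
import bisect

class Index:
    """Toy index over (field value, RID) entries, kept sorted by value."""
    def __init__(self):
        self.entries = []  # sorted list of (field_value, rid) pairs

    def insert(self, value, rid):
        bisect.insort(self.entries, (value, rid))

    def lookup(self, value):
        # Return all RIDs for a value: one for a primary index,
        # possibly many for a secondary index.
        i = bisect.bisect_left(self.entries, (value,))
        rids = []
        while i < len(self.entries) and self.entries[i][0] == value:
            rids.append(self.entries[i][1])
            i += 1
        return rids
```

The same structure serves as a primary index (unique key, one RID per value) or a secondary index (one value may map to several RIDs).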

Figure 3: B+-tree on top of record storage.

Combining Access Methods and Record Storage

In some systems, the access method and record storage layers are integrated by embedding data directly within the leaf nodes of a B+-tree. The leaf level then essentially becomes a record storage, but additionally is also now sorted on the index key. Range queries are made efficient as a result of this combination compared to an unsorted record storage layer. However, to access the records using other keys, we would still need an access method (an index on other keys) on top of this combined storage layer.

Storage and Access Methods With Out-of-Place Updates

Most modern storage engines, both distributed NoSQL and distributed SQL engines, use out-of-place updates. In this approach, all updates are appended to a current write block maintained in memory, which is flushed to disk in one IO when the block fills up. Note that if the node fails before the write block reaches disk, durability is preserved by replication within the distributed database. Blocks are immutable, with records packed and written only once, eliminating space management overhead. Older versions of a record are garbage-collected by a cleanup process if desired. This approach has two advantages:

  1. Amortized IO cost: All the records in the write block together need one IO compared to at least one IO per record for in-place updates.
  2. Exploits sequential IO: These techniques were invented in the era of magnetic hard disk drives (HDD), and sequential IO was way superior to random IO in HDDs. But even in the era of SSDs, sequential IO is still relevant. The append-only nature of these systems lends itself to sequential IOs.
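The append-only write path can be sketched as follows. This is a simplification: the capacity threshold and names are illustrative assumptions, and the "disk" is just a list of immutable blocks.

```python
class WriteBlock:
    """Toy out-of-place update path: append, then flush the whole block."""
    def __init__(self, capacity=4):
        self.capacity = capacity
        self.records = []   # current in-memory write block
        self.flushed = []   # immutable blocks already written "to disk"

    def put(self, key, value):
        # Every update is appended; nothing is modified in place.
        self.records.append((key, value))
        if len(self.records) >= self.capacity:
            self.flush()

    def flush(self):
        # One sequential IO writes the whole block; it is immutable
        # from here on, so no free-space management is needed.
        self.flushed.append(tuple(self.records))
        self.records = []
```

Note how a block's worth of records shares a single flush, which is the amortized-IO advantage from point 1 above.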

The most well-known and commonly used out-of-place update storage engines rely on a data structure called the log-structured merge-tree (LSM-tree). In fact, LSM-trees are used by almost all modern database storage engines, such as BigTable, Dynamo, Cassandra, LevelDB and RocksDB. Variants of RocksDB are employed by systems like CockroachDB and YugabyteDB.

LSM-Trees

The foundational concepts for modern LSM-tree implementations originate from the original paper on the concept, as well as from the Stepped-Merge approach, which was developed concurrently.

The Stepped-Merge algorithm arose from a real, critical need: managing the entire call volume of AT&T’s network in 1996 and recording all call detail records (CDRs) streaming in from across the United States. This was an era of complex phone billing plans — usage-based, time-of-day-based, friends-and-family-based, etc. Accurately recording each call detail was essential for future billing purposes.

However, the sheer volume of calls overwhelmed the machines of the time, leading to the idea of immediately appending CDRs to the end of record storage, followed by periodic “organization” to optimize lookups for calculating bills. Bill computations (reads) were batch jobs with no real-time requirements, unlike the write operations.

The core idea behind solving the above problem was to accumulate as many writes as possible in memory and write them out as a sorted run at level 0 once memory fills up. After a certain number, T, of level 0 runs are available, they are all merged into a longer sorted run at level 1. During the merge, duplicates can be eliminated if required.

This process of merging T sorted runs at level i to construct a longer run at level i+1 continues for as many levels as required, drawing inspiration from the external merge sort algorithm. This idea is very similar to the original LSM-tree proposal and forms the basis of all modern LSM-based implementations, including the concept of T components per level. The merge process is highly sequential-IO friendly, with the cost of each sequential IO amortized over many records.
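The level-i to level-i+1 merge can be sketched with a standard k-way merge. This is a simplification that assumes each run is a sorted list of (key, value) pairs with unique keys, and that runs are ordered newest first so duplicate elimination keeps the newest version.

```python
import heapq

def merge_runs(runs):
    """Merge T sorted runs into one longer sorted run.

    Each run is a sorted list of (key, value) pairs; runs[0] is the
    newest. Duplicate keys are eliminated, keeping the newest version.
    """
    # Tag entries with the run's age so equal keys sort newest-first.
    tagged = [[(key, age, value) for key, value in run]
              for age, run in enumerate(runs)]
    merged = []
    for key, _age, value in heapq.merge(*tagged):
        if not merged or merged[-1][0] != key:
            merged.append((key, value))
    return merged
```

The merge streams through its inputs in order, which is what makes the on-disk version of this step sequential-IO friendly.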

However, the reads, in the worst case, must examine every sorted run at each level, incurring the penalty of not updating in place. Yet, looking up a key in a sorted run is made efficient by an index, such as a B+-tree, specific to that sorted run. These B+-trees directly point to the physical location (as opposed to a RID), since the location remains constant. Figure 4 illustrates an example of an LSM-tree with three levels and T=3 components per level.

The sorted runs are shown as B+-trees to optimize read operations. Notice that the leaf level represents the sorted run, while the upper levels are constructed bottom-up from the leaf (a standard method for bulk loading a B+-tree). In this regard, an LSM-tree can be considered a combination of an access method and a record-oriented storage structure. While sorting typically occurs on a single key (or a combination of keys), there may be cases requiring access via other keys, necessitating secondary indexes on top of the LSM-tree.
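The worst-case read path described above can be sketched like this; a binary search stands in for each run's B+-tree, and the shapes of `memtable` and `levels` are illustrative assumptions.

```python
import bisect

def lsm_get(key, memtable, levels):
    """Toy LSM point lookup.

    memtable: dict for the in-memory write block.
    levels: list of levels, each a list of sorted runs of (key, value)
    pairs, ordered newest to oldest.
    """
    if key in memtable:          # check the in-memory write block first
        return memtable[key]
    for level in levels:         # then each level, newest first
        for run in level:        # worst case: every sorted run is examined
            keys = [k for k, _ in run]
            i = bisect.bisect_left(keys, key)
            if i < len(run) and run[i][0] == key:
                return run[i][1]
    return None
```

Searching newest to oldest is what makes the first hit authoritative, since older runs can only hold stale versions of the key.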

Figure 4: Example LSM-tree with three levels on disk and three components per level.

Comparing In-Place and Out-of-Place Updates

The table below compares key features of storage engines of early relational systems with those developed for modern storage engines. It assumes that one record is being written and one primary key value is being read. For early relational systems, we assume the presence of a B+-tree index on the primary key (the details of whether the leaf level contains the actual data or a record identifier (RID) do not significantly affect this discussion). For the LSM-tree (most common modern storage engines), the assumption is that the sorted runs (and the B+-trees) are based on the primary key.

Conclusion

Storage engines have evolved to handle the heavy write workloads many database systems encountered with the advent of internet scale. LSM-trees have become popular for solving this challenge. However, LSM-trees give up some real-time read performance relative to the in-place update (IPU)-based storage engines used in early relational systems. Under some circumstances, it may be wise to find a system that blends the best of both ideas: use out-of-place updates for record storage to continue handling write-heavy workloads, but use in-place updates for access methods to minimize read overhead.

Visit our website to learn more about Aerospike Database.



Presentation: Where is the Art? A History in Technology

MMS Founder
MMS Andy Piper

Article originally posted on InfoQ. Visit InfoQ

Transcript

Piper: In October 1971, a gentleman called Frieder Nake published a note in PAGE, the Bulletin of the Computer Arts Society, entitled, “There Should Be No Computer Art”. “Soon after the advent of computers, it became clear that there was a great potential application for them in the area of artistic creation”, he began. “Before 1960, digital computers helped to produce poetic text and music. Analog computers, or only oscilloscopes, generated drawings of sets of mathematical curves and representations of oscillations. It was not before the first exhibitions of computer produced pictures were held in 1965 that a greater public took notice of this threat, as some said, progress, as some thought. I was involved in this development from its beginning onward in 1964.

I found the way the art scene reacted to the new creations, interesting, pleasing, and stupid. I stated in 1970 that I was no longer going to take part in exhibitions. I find it easy to admit that computer art did not contribute to the advancement of art, if we judge advancement by comparing the computer products to all existing works of art. In other words, the repertoire of results of aesthetic behavior has not been changed by the use of computers. This point of view, namely, that of art history, is shared and held against computer art by many art critics. There is no doubt in my mind”, he said, “that interesting new methods have been found which can be of some significance for the creative artist”.

As you might imagine, this was a bit of a controversial take. Here was a man who had for part of the previous decade been an insider, been an advocate for the use of algorithmic and generative processes to create art. He’d taken part in exhibitions around the world in that mid-’60s period up to 1970, and his output in that period was estimated at around 300 to 400 works in ink produced on a high precision flatbed plotter. Frieder Nake’s Wikipedia page says this: “His statement was rooted in a moral position.

The involvement of computer technology in the Vietnam War and in massive attempts by capital to automate productive processes and thereby generate unemployment, should not allow artists to close their eyes and become silent servants of the ruling classes by reconciling high technology with the masses of the poor and suppressed”. I’ll just finish this piece by reading another piece from what he posted in that article, which is, “Questions like, is a computer creative, or is a computer an artist, or the like, should not be considered serious questions, period. In the light of the problems we are facing at the end of the 20th century, those are irrelevant questions”.

Background

I’m Andy. I live primarily online on the federated social web. For the past 25 years, I’ve worked alongside you in the technology industry, and I expect to continue to do so. I graduated in 1997 with a degree in modern history, which has always made me something of an interesting person at career fairs, when my employers have been rolling me out to encourage folks to get involved in technology. I’m self-taught as a developer. I’m not here to talk about AI and large language models, and generative creation of art using those means. Instead, we’re going to go on a journey and look at one aspect of computer history, and that is creative technology and art, and the ways in which it’s been considered a threat and misunderstood at different times in our shared history. We’ll also find out how I’ve accidentally become an artist. We start out in some despair. We’re going to go through some discovery. I hope, I certainly have, we will find some delight.

The Event

Let’s talk about what happened, the event. This is my euphemistic term for my recent career path. I was laid off from my dream job from a company that no longer exists. I spent nine years of my life working there, and I spent 15 years of my life on that platform, passionate about enabling people to communicate in real-time, openly around the world. For the first time in our shared history, I think that platform created a lot of opportunities and new ways to communicate that we’ve inherited from it. All it took was one spiteful billionaire to change everything in a moment and tear it all down. It was a dramatic change, and I knew that I was going to have to take some time away from what I’d been doing, take a step back.

During and since the pandemic, my wife and I had found our house filling up with our hobbies, gubbins, just lots of things I like to play around with, electronics and retro gadgets and things. My wife likes to sew and do other things with her handcrafting, and the house was filling up. We thought we’d try to find a space for those. We ended up renting an art studio and moving our hobbies there. There’s a longer story here, if you’re interested, about the difficulties of renting office space if what you want to do is hot soldering. I’ve had some interesting conversations about insurance there. I’ll tell you about this adventure, and then we’ll come on to the art history piece I mentioned.

While I was working through that layoff process, which took a longer time than you might expect, for reasons, we were also getting settled into this art studio. We’ve got a space over in Southwest London. It’s just outside Wimbledon. It’s an old converted paper warehouse. It’s got a range of artists, painters, sculptors, ceramicists, photographers, folks that do picture framing as well. My wife, Heidi, and I, moved in there, and we didn’t really think of ourselves as going there to do art. We just wanted a space for our hobbies. I excitedly started going to IKEA, getting lots of shelving, getting all my stuff set up. I grew up in the 1980s and early ’90s, and loved the 8-bit era of computers, where you probably may or may not have taken the thing apart and had a look inside. I had an Acorn Electron at home, and I had BBC Micros at school.

For me, the reemergence in the last 10 to 15 years of affordable, accessible technology with an educational focus (we're talking about things like Arduino and Raspberry Pi) has really enabled me to re-engage with my early passion for technology. I love solving problems with code, and I also love to tinker around with electronics. In fact, about three years ago, I started to get involved with the MicroPython project. MicroPython is an implementation of Python that runs on microcontrollers. Once you’ve connected up a few sensors to your little small computing board, you might want to put that into a case. As luck would have it, my friend was getting rid of an old 3D printer, so I inherited that from him. Before long, I had this whole studio, and I had three 3D printers. Not that I printed more, but I could have done that as well.

The Makerspace

Within a few months, at the beginning of 2023, my wife and I built up this small makerspace for the two of us. She was using her cutting machine to create crafts with vinyl, and I only had to 3D print, but we still didn’t really know what we were doing there. We just had this space. I was really in denial. It was quite a traumatic experience watching what happened at my former employer. Several of our immediate neighboring artists in the studios are very traditional painters. Have been there for quite a long time, very different styles. We didn’t really feel very connected to them. The Wimbledon Art Studios run a show twice a year. Our first show was this time last year, last May, and we were invited to take part. We thought, it sounds great. A lot of the other artists were encouraging us and saying, “Great commercial opportunity”. What are we going to sell? I thought I could print some bits and pieces, some pots and some trays.

My wife was creating things with her vinyl cutter, creating bags and T-shirts and things. I thought it’d be good to have something a bit more interesting than that. I got a little toy 3D printer. You can get them for less than £100, not very high quality, but great for playing with and learning. I put one of those out. Then I’d also seen this thing in a magazine, which I thought I’d have a go at making, so that if people came into the studio, I could talk about technology. This is BrachioGraph. BrachioGraph means arm-writer. It was created by a gentleman called Daniele Procida. He presented it at PyCon UK in 2019. It’s super simple, you can see here. It’s made up of lollipop sticks. It’s got three small servomotors, a little clip to hold the pen, and beyond that, you’ve got a Raspberry Pi Zero. The code is all in Python. It’s all open source. The recipe is online; for about £20, just a little bit over, you too can build your own drawing robot. It’s lovely and basic.

One of the things I love about it is its limitations. It’s two arms on rotational joints, it’s like your arm. It can only draw, however, curves. It cannot move in straight lines. The lines are wiggly and inaccurate. They’re very cheap motors, it’s very slow. It’s going to draw a little bit for us. I put this out on display, and I had a few things being drawn as people came through the studios, and we were talking about 3D printing and other things. Again, we were very different to what all the other artists were doing. I’d get to talk to, especially the youngsters who would go, “Dad, look, this robot’s drawing things”. If they liked whatever it had produced, I just gave them the bit of paper to take away home with them. I enjoyed explaining the limitations and how it worked with the arm and the fact that it can’t draw straight lines.

In order to draw straight lines, you’re going to need high precision. You’re going to need an x and y axis to move your pen around. Something else has an x and y axis and also has a z axis, and that’s a 3D printer. A plotter is a 2D printer. A 3D printer has a z axis, moves up and down, as well as left and right and around. You put an extruder on the top and squirt hot plastic through, and you’ve got a 3D printer. Things were just starting to come together, in my mind, as we did this. This was never going to threaten our neighbors in the studios, and I don’t want to threaten our neighbors in the studio. This is not something that alarmed anybody in the studios as the new fancy artist in town.

Reemergence of Pen Plotters

After that show, I thought, I’m interested in this. I’m going to go and buy a proper one, because everybody got really engaged in my plotter. This is a proper plotter that you could buy commercially. It’s called an AxiDraw. This is a nice one. I got that in. My wife got really interested as well. This is a piece that actually we’re going to have in the show in May. It’s got a lovely open-source ecosystem, this machine. The hardware is not open source, but you can drive it using Inkscape. It’s got a Python API. Once that arrived, my wife got interested. This is a sped-up thing that you can see. It’s nice and accurate. This is just drawing a quick postcard. I immediately regretted that I only got the A4 version, because I want to now do really big things. Once I started to think more about the space that we’d found ourselves in, several things began to emerge.

The first one is that as I looked at what I was doing, and then starting to discover what other people were doing with plotters, I realized that this was not new at all. There’s quite a renaissance as people today are starting to use pen plotters a bit more again, but as we’ll see, people have been using pen plotters for a long time. I also realized that while we could transform images to lines and draw them out using a plotter, in the case of the BrachioGraph, I gave it a picture, and it transformed that picture into what was drawn. You can also go directly without having an intermediate picture stage. You can write some code and drive the plotter. You don’t need to be creative to come up with something first, if you don’t want to. You don’t have to be artistic. You don’t have to have some huge artistic vision to come up with something that’s interesting.

Another thing that I particularly fell in love with was that this is about tangible, physical output. I think digital creations are amazing and fantastic and fun, but there’s a whole new rabbit hole you find yourself going down when you start getting interested in the different materials that you’re using as well.

We had another show coming up in November last year, and I, because I’m constantly living online, reading what people are doing, discovered this article in HackSpace magazine, which is available for free online, as well as in the shops, in paper form, if you prefer. A gentleman called Ben Everard had bought a cheap plotter online with laser cut pieces using an Arduino, and found that the code didn’t work. He got frustrated by that, and decided that he wanted to recreate the thing himself. He wrote an article about it. I tried to follow this article and found that it had quite a few missing details. The process of building this plotter was a bit more complicated. I needed to 3D print some parts. I needed to put together a small circuit using a Raspberry Pi Pico, which is a microcontroller.

This plotter is a hanging plotter, called a polargraph. You have this central gondola that moves around on strong cables, otherwise known as cotton, pieces of cotton from pulleys. It’s pretty cheaply made. It’s a little bit annoying, because it’s polar, it always needs to recenter in the middle of the board. There’s nothing automatic to do that. You’re trying to press buttons to get the thing to come back to the center each time. I’m not going to give you a blow-by-blow account of how I built this. You can go and read about it on my website, if you would like. There’s a project page for it. I’ll tell you a little bit about it. G-code in 3D printing is simply a set of instructions that tells a 3D printer how to move the head in x, y and z, and at what points to heat up the filament and what temperature to push it through at. Plotters work on a very similar set of principles: G-code gives them a set of x and y instructions.

Open-source software here has benefited both the 2D printer plotter and 3D printer areas. CNC machines and drilling machines also use the same set of instructions. This plotter simply runs a piece of code originally written for the Arduino, called GRBL. You just transform your image into this G-code, fire it over a serial port to, in this case, the Raspberry Pi Pico, and it sends instructions, and it moves around and draws things. Away goes the plotter. It’s simple and it works.
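As a rough illustration (not the actual GRBL or BrachioGraph code), here is how a polyline might be turned into G-code moves. Mapping pen up/down to Z moves is an assumption; many pen plotters drive a servo with a different command instead, and the feed rate here is an arbitrary choice.

```python
def polyline_gcode(points):
    """Emit toy G-code that draws a polyline through (x, y) points."""
    lines = ["G21", "G90"]                 # millimetres, absolute coordinates
    x, y = points[0]
    lines.append("G0 Z5")                  # pen up (assumed Z mapping)
    lines.append(f"G0 X{x:.2f} Y{y:.2f}")  # rapid move to the start point
    lines.append("G0 Z0")                  # pen down
    for x, y in points[1:]:
        lines.append(f"G1 X{x:.2f} Y{y:.2f} F1500")  # draw at a set feed rate
    lines.append("G0 Z5")                  # pen up when done
    return "\n".join(lines)
```

Fire the resulting text over the serial port one line at a time and the controller moves the pen accordingly.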

As well as having built that, my wife and I actually used the AxiDraw, the proper plotter, to start to create some art ourselves. There’s a lovely piece of software, software solves all problems, or a lot of them anyway, that enables us to take our images and decompose them into lines that the plotter can create. You’ll see here on the right-hand side of the image, are some pictures of lighthouses. Those are pictures of fairly low-quality digital camera taken images from around the Great Lakes in the U.S., where Heidi is from, and they were taken about 20 years ago. They weren’t high quality. If you transform them into some plotter art, you get these quite nice effects.

On the left-hand side of the image are some things I came up with which are a bit more abstract, decomposed circuit-like type diagrams. We hung these up on the wall outside. If you met me during the event here, and I gave you a card, you got a small Andy Piper original on the back of the card. We had those outside. We had those up for sale. I put the hanging cluster inside the door, and people came through. It was the 60th anniversary of the best TV show in the world that Jeremy knows all about, because he made the first computer game for. I had that drawing on the wall as well. It was a little bit of fun. It definitely got people talking about what we were doing. We could draw them in and say, come and see a machine drawing things.

Where is the Art?

A lot of the visitors were much more interested. We had things on the wall outside the studio this time. We had something to talk about. One specific woman came and said to my wife, we’re looking at the lighthouses, when she said, “Where’s the art in this?” My wife has a little bit less patience than me. I’ve been doing developer relations for 15 years, so I hopefully am a little bit more tactful.

Before she got too annoyed, I jumped in and started talking this lady through the process of choosing the image, working out which algorithm would be good for processing it into line art, the choice of pens and materials, actually putting it through the plotter, the whole process. It’s not the same as taking a photo on your phone of Big Ben, and going home and sending it to your photo printer and printing out 10 copies. Very few people print out copies of photos anyway, I think. It’s not the same: each one of those copies is a carbon copy, while each one of these is a unique thing. She seemed quite satisfied once she got through her inquiry and made her point. I think this is really interesting because it comes back to Frieder Nake and what was happening in 1970 when he was finding that the traditional art world was saying, no.

Contemporary Plotter Artists

One of my favorite 1960s art pieces is this piece by Georg Nees, in 1968, it’s called Schotter, which means gravel in English. This was a period, mid-1960s when computers were not small, didn’t have rich graphical displays, didn’t have easy to use input devices. I think this is quite a lovely thing. It’s a very simple algorithm. You draw a square, and then you repeat that square, adding a little bit of noise for each iteration, and then you get this lovely collapsing effect that I find visually pleasing. Nees was interested in the relationship between order and chaos.
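The Schotter recipe just described (draw a square, repeat it, add a little more noise each row) can be sketched in a few lines of Python. The grid dimensions, seed, and rotation range here are illustrative choices, not Nees's original parameters; the output is just corner coordinates that any plotter library could draw.

```python
import math
import random

def schotter(cols=12, rows=22, size=10.0, seed=1):
    """Return a list of squares, each a list of four (x, y) corners."""
    random.seed(seed)
    squares = []
    for row in range(rows):
        wobble = row / rows  # noise grows toward the bottom of the grid
        for col in range(cols):
            cx = col * size + size / 2 + random.uniform(-1, 1) * wobble * size
            cy = row * size + size / 2 + random.uniform(-1, 1) * wobble * size
            angle = random.uniform(-1, 1) * wobble * math.pi / 4
            half = size / 2
            corners = []
            for dx, dy in ((-half, -half), (half, -half),
                           (half, half), (-half, half)):
                # Rotate each corner around the (jittered) square centre.
                corners.append((cx + dx * math.cos(angle) - dy * math.sin(angle),
                                cy + dx * math.sin(angle) + dy * math.cos(angle)))
            squares.append(corners)
    return squares
```

The top row has zero wobble, so it comes out as a neat grid; each row below drifts a little further into the collapsing effect the piece is known for.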

This piece is now in the collection at the Victoria and Albert Museum, so it must be art. This was entirely created using code. It’s often difficult today to take the programs from 1968, in this case, and rerun them. This was written in ALGOL. Of course, computer systems have moved on. You certainly wouldn’t be running something today at the same speed that a system was creating it in 1968.

Often, the compilers, the interpreters, have gone away as well. Input and output devices are completely different. It’s incredibly cool to me that you can download a Rust package called whiskers today, fire it up, and do that exact same thing in real-time with sliders that let you modify all of the parameters and experiment and see what the effects would be. Today we have programming environments like Processing, p5.js. There are packages in Rust and Python and others that let you do some incredibly fun things in an experimental way. There’s another strand here which I’m going to leave hanging and let you go and research if you’re interested on your own, around preservation of our shared computer history and past, and how we can preserve those original ideas.

I went to the V&A, and a lot of the pieces are not on permanent display. They do have cyclical exhibitions. Schotter, you can find it on their website. You get the information from the page. You send them an email, say, I’d like to see this piece, please, and you get invited to go and have a look. You’ve got to give them about a week’s notice. I read the piece of information about the original A4 size sheet. I sat in the library there at the V&A. The lady wheeled out a trolley with the items I’d ordered. This was one of them. This is, in fact, a 1-meter-high lithograph, because this was the display piece. This was the piece that was actually displayed in Montreal in 1972. It’s quite fascinating.

Down in the corner, you see this little detail, and you’ll find this across Nees pieces, because Nees, as it happened, worked for Siemens in the 1960s, a Siemens System 4004. This is a Siemens System 4004, as you can see, highly portable, very easy to tinker with. This was a computer from the 1971 movie, “Charlie and the Chocolate Factory”, that helped to find the location of the final golden ticket, so IBM System/360 compatible.

To show you a couple of other pieces from the V&A. This is one from a gentleman called Peter Struycken. This one’s from 1969. I love this one because we actually have the original code preserved here, and because this is QCon, let’s have a look at the code. I don’t know what language this is. I had a look. There are some familiar structures there. I cut and pasted it into Google Gemini. I said, “Can you help me figure out what language this is in?” It said, “It might be ALGOL, it might be Pascal, I can’t quite tell. Give me some more context”. I said, it was written by Peter Struycken. Gemini gave me the Wikipedia entry for him. Didn’t tell me how to rerun this code. This is another amazing piece from the collection.

This one is fascinating to me because it actually dates from 1962, as you’ll see. It’s from a British artist called Desmond Paul Henry. This is not from a programmable computer; it is from a mechanical computer, one that was used as an army analog bombsight computer, actually, that he repurposed. He rebuilt that computer into three drawing machines, and took their swinging arm parts and attached pens to them. I did like the texture, and you can see the line the Biro drew on the card. Let’s come back to the coding aspect. I’ve got another little bit to read for you here, because I do love the way that this has been written about.

Before I do that, I’ll point out that Georg Nees, Frieder Nake, who are two people I’ve mentioned, were only a small number of a group of pioneers that included folks like Vera Molnár, and they were exploring this aspect of code primarily from a mathematical standpoint, into art pieces.

As we move from the ’60s into the ’70s, we also then start to see more capable output devices. This book, “Tracing the Line”, was published just earlier this year, and covers a variety of contemporary plotter artists. The introduction has a lovely background, and I was just going to read you this part. “The first attempts at generative art date back to the 1960s where no graphic software existed, let alone a screen or computer mouse. Frieder Nake, one of the pioneers of this art form was 25 years old and studying mathematics at the University of Stuttgart, when he began experimenting with a ZUSE Graphomat Z64, a drawing machine that was a predecessor to modern plotters, capable of creating intricate graphics by writing programs.

These programs had to be transferred to punched cards, which were then processed by a big computer, which then came out with a punched tape, which was fed into the Graphomat to create the art pieces”. The ZUSE Graphomat supported the use of four pens. If you actually go and look at Frieder Nake’s work, which, again, you can do in the V&A, you’ll see that he uses four colors. Referring to conceptual art, Frieder Nake repeatedly said that the program itself was the artwork, the execution, the image merely represented the surface. This is where the backlash comes in. We’ve got this group of young 25-year-old mathematicians starting to use computers to generate things that people are getting interested in. Traditional art world, “Witchcraft. Computers are not thinking machines. They must not be allowed to create art”.

Harold Cohen (Traditional Artist), and the AARON Program

There was a traditional artist, though, a British artist, in fact, called Harold Cohen. He was an existing artist. He had a career as a painter. He started to think about how computers could be applied to his existing practice. He taught himself to code, like me, actually, which is quite a nice parallel. He started to consider the choices that could be taken away from his artistic practice, like where to put the color or how to arrange lines by giving those choices to a program or a computer. He continued to impose rules on the output that the computer created. Cohen moved to the University of California in 1968, and he went through this transitional period in the ’60s and ’70s, where he was moving from working with mixed media, which is what he had previously done, to co-creating with the computer.

What you can see here is a detail of a piece that’s printed on dot-matrix printer paper. I’ve taken a close-up picture so you can see, here on the right-hand side, it’s a pattern of numbers printed in a diamond shape, and then he’s gone in with a felt tip and drawn over clusters of those numbers to create the artwork. In 1970, there’s a remarkable sequence where Cohen uses code to draw these shapes and label them with the colors that he’s going to apply. He’s using some rules in the program to determine where the colors may go, and which colors may or may not touch one another. Then, on the top right, you can see where he’s inked that in.

The final piece at the bottom there is an artwork that he’s created in acrylic. He’s thinking about, all the time as he’s doing this, as he’s moved to California, starting to work with the systems they had and give up more of his artistic practice. He started to formulate this idea of a program to co-create with him. He asked this question, what are the minimum conditions under which a set of marks, functions as an image? Boiling it all the way back to the very basics, at what point is this an image?

He created a program, and he called this program AARON. He built machines that enabled AARON to do the drawing, as well as showing where things should go. One of these, interestingly, was a turtle. Jeremy was famously immortalized on the packaging for the turtle that the BBC had for the BBC Micro. I grew up in the 1980s. There was a programming language called Logo that you could run on the BBC Micro. You had a little digital turtle, and you gave it instructions like forward and left, and it would move around. Harold Cohen’s turtle moved around the floor of the gallery, and drew shapes and painted, which was very cool, I think. I love this connection, which was unknown to me until I did the research. AARON started out written in C.
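Those Logo-style movement commands can be captured in a few lines. This is a minimal sketch, not BBC Logo itself: a tiny interpreter for forward/left/right instructions that records the pen positions they produce.

```python
import math

def run_turtle(commands):
    """Interpret Logo-style (command, value) pairs, returning the pen path.

    A minimal sketch, not BBC Logo itself: just forward, left, and right.
    """
    x, y, heading = 0.0, 0.0, 0.0   # start at the origin, facing "east"
    path = [(x, y)]
    for cmd, value in commands:
        if cmd == "forward":
            x += value * math.cos(math.radians(heading))
            y += value * math.sin(math.radians(heading))
            path.append((round(x, 6), round(y, 6)))
        elif cmd == "left":
            heading = (heading + value) % 360
        elif cmd == "right":
            heading = (heading - value) % 360
    return path

# Four repetitions of "forward 100, left 90" trace a closed square.
square = run_turtle([("forward", 100), ("left", 90)] * 4)
```

Feed the resulting path to a plotter (or a screen) and you have the essence of turtle graphics: a handful of relative movement commands composing into a drawing.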

Cohen rewrote it into Lisp because he got frustrated with C’s limitations. By the mid-1990s, Harold Cohen was giving the program the ability to paint for him or apply the paint for him as well. You can see it’s a bit splotchy, but this was, I think, from 1995. This was mind blowing to me, because just before Christmas, so around November last year, I went to an exhibition and a workshop with a lady called Licia He, who is a contemporary plotter artist. She has got an AxiDraw, but she has converted it not to drive a pen around, but to actually move around and dip a brush into paint and draw and apply the paint and ink to the paper.

That’s a bit more complicated than a pen, because you need to know how much paint to load up on the brush. You need to know how much you can apply before the paper goes soggy and wears through. You need to be able to wash the brush in between colors. I thought this was brilliant and brand new, and it is brilliant. Licia is a phenomenal artist, but not new. This has happened before. This is actually Licia’s setup that I saw.

If you Google Harold Cohen and AARON, then you will find that AARON is often called an artificial intelligence. I think that’s a very interesting thing to stop and think about, given the current hype cycle that we’re caught up in. Cohen said this, “If what AARON is making is not art, what is it exactly, and in what other ways, other than its origin, does it differ from the real thing? If it is not thinking, what exactly is it doing?” You can go and look at Harold Cohen’s work. There is a small gallery called the Gazelli Art House over in Mayfair, just close to Green Park station. You can walk in and look at some of the original pieces of work. If you’d like to see his work with AARON, that’s currently on display in New York at the Whitney Museum of American Art, by the High Line.

How to Be Creative

I’m going to have to skip through 20 years of computer art. I did want to give you that background about the 1960s and 1970s, and give you a sense of what’s been happening, probably outside of most of our domains. Let’s get back to being creative today, and round out by looking at how we can be creative. A pen plotter is not an inkjet. An inkjet or a laser printer is going to reproduce things for us. We know how annoying printers are today; they’re probably the most annoying pieces of technology in most people’s lives. You’re going to need a plotter. We’ve talked about several types: the BrachioGraph, the polargraph, the x-y plotter. You’re going to need some line art. There’s software that enables you to convert bitmap images to vector art, such as DrawingBot, a piece of desktop software written in Java. You’re going to need some materials.
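DrawingBot’s real algorithms are far more sophisticated, but the basic idea of turning a bitmap into plottable lines can be sketched very simply. The toy function below (an illustration only, not DrawingBot’s method) covers the dark pixels of a grayscale image with horizontal hatch segments that a plotter could draw:

```python
def bitmap_to_hatch_lines(pixels, threshold=128):
    """Cover the dark pixels of a grayscale bitmap with horizontal segments.

    pixels: rows of 0-255 brightness values; anything below the threshold
    counts as "dark" and gets drawn. Returns ((x0, y), (x1, y)) segments.
    """
    segments = []
    for y, row in enumerate(pixels):
        start = None
        for x, value in enumerate(row):
            dark = value < threshold
            if dark and start is None:
                start = x                              # a dark run begins
            elif not dark and start is not None:
                segments.append(((start, y), (x - 1, y)))
                start = None                           # the run has ended
        if start is not None:                          # run reached the edge
            segments.append(((start, y), (len(row) - 1, y)))
    return segments

# A tiny 3x4 "image": one dark run in each of the first two rows.
image = [
    [0, 0, 255, 255],
    [255, 0, 0, 255],
    [255, 255, 255, 255],
]
segments = bitmap_to_hatch_lines(image)
```

Real converters add dithering, cross-hatching at varying densities, and path ordering to minimize pen travel, but every approach reduces to the same question: which marks, in which order, approximate the image?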

Materials are where things get really exciting, because you can choose the different weights and textures and colors of paper. You can choose whether you want to use fineliners, or fountain pens, or Sharpies, or metallic ink for whatever you create. As you build up those lines on your piece of art, as you watch the plotter move and build up the ink, every single pen stroke is unique. It’s a little bit unpredictable every time exactly how it’s going to lay down on the paper. This is a piece I made for the art show last November, and this is all printed on cotton rag paper. This is a Cistercian numeral. The Cistercian monks in the 13th century had this numeric system that enabled them to write any number from 1 to 9,999 as a single character like this, depending on where the lines appeared on the stave. This number is 1984. I drew this using a plotter, using sepia ink on this cotton rag paper. The cotton rag paper is not even.

A plotter really wants your surface to be completely flat. This one requires a bit of babysitting, because if the paper isn’t fully flat, then the pen might end up dragging, and things like that. It’s really quite an interesting, tangible process. Growing up in the 1980s, I occasionally saw a plotter in an office or at school, but then they fell out of use, and along came inkjets and laser printers, and we all got them at home. That means nobody wants plotters anymore, so you can get them on eBay. This is my current project. It’s a Roland DXY-1100. It’s an A3 plotter, and it takes up to eight different colors of pen. You need to figure out how to plug it into your modern computer. This uses a 25-pin serial interface, which you need to wire up to USB.

Fortunately, somebody’s written a Python library that lets you talk to this, which is very handy. The little purple adapters on the left-hand side are ones I’ve had to print for the pens. This was an early example of the big ink manufacturers trying to lock us in to their devices: you got these tiny, stubby pens that would only fit these plotters. Now I’m 3D printing my own adapters for modern pens.
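The talk doesn’t name the Python library, but plotters of this era speak terse ASCII command languages; the DXY-1100 understands RD-GL, Roland’s HP-GL-compatible dialect. As a rough sketch (the command set, port name, and serial settings here are assumptions to check against the plotter’s manual), driving one amounts to writing short commands down the serial line:

```python
def polyline_to_hpgl(points, pen=1):
    """Build an HP-GL command string that draws a single polyline."""
    (x0, y0), rest = points[0], points[1:]
    cmds = [
        "IN;",                        # initialize the plotter
        f"SP{pen};",                  # select a pen (the DXY holds eight)
        f"PU{x0},{y0};",              # travel, pen up, to the start point
    ]
    for x, y in rest:
        cmds.append(f"PD{x},{y};")    # pen down: draw to the next vertex
    cmds.append("PU;SP0;")            # lift the pen and park it
    return "".join(cmds)

# Sending it down the serial line (sketch only -- the port name, baud
# rate, and USB adapter details are assumptions to verify locally):
# import serial  # pyserial
# with serial.Serial("/dev/ttyUSB0", 9600, timeout=1) as port:
#     port.write(polyline_to_hpgl([(0, 0), (1000, 0)]).encode("ascii"))
```

The appeal of these machines for coders is exactly this: the whole drawing is just a stream of human-readable pen-up/pen-down coordinates.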

Code = Art

Am I a technologist, am I a historian, or am I an artist? I’m a coder. I write code. I’m sure many, if not most of us, do the same. Code can transform art. Code can create art. Code can be art. Computers and technology have, since the ’60s at least, been at the heart of a tension in our society, and certainly with the art world. The nature of art, I think, to me, is to comment on that tension and the things that we see in our society and the experiences we have. Isn’t tension an element of art? There’s a link, andypiper.url.lol/wita. It’s just a simple little page. There are a lot of links on there, if you want to go and explore any of the elements I’ve spoken about.

Conclusion

This is something I plotted out. It sits on the wall inside the studio. It’s Georg Nees’ artist statement from his 1972 portfolio in Montreal. “Computer art is sort of artificial genetics. Its DNA is on punched cards. Information originally emanating from the brains of programmers, yet to be mutated and augmented in complex ways by dice-gaming computers, emerging finally into the environment of rejecting, and/or, as one may observe, promoting culture”. We live in an amazing time. Jeremy said, we think we’re in the technology business, but we’re actually in the people business. I love that. We are physical beings. That’s really important as well.

We live at a time of ephemerality, and fleeting digital moments. A single line of code might not be unique, a few lines of code might be more unique, but every stroke of a pen or a brush is unique. Making something digital is fun, but mostly ephemeral. Making something tangible and physical has the opportunity to endure. Go create. Go take the opportunity of all of the open-source software and hardware and beautiful code that we have, and make something wonderful.

See more presentations with transcripts

Subscribe for MMS Newsletter

By signing up, you will receive updates about our latest information.

  • This field is for validation purposes and should be left unchanged.


Google Releases PaliGemma 2 Vision-Language Model Family

MMS Founder
MMS Anthony Alford

Article originally posted on InfoQ. Visit InfoQ

Google DeepMind released PaliGemma 2, a family of vision-language models (VLM). PaliGemma 2 is available in three different sizes and three input image resolutions and achieves state-of-the-art performance on several vision-language benchmarks.

PaliGemma 2 is an update of the PaliGemma family, which was released in 2024. It uses the same SigLIP-So400m vision encoder as the original PaliGemma, but upgrades to the Gemma 2 LLM. The PaliGemma 2 family contains nine different models, combining LLM sizes of 2B, 9B, and 27B parameters with input image resolutions of 224, 448, and 896 pixels square. The research team evaluated PaliGemma 2 on a variety of benchmarks, where it set new state-of-the-art records on several tasks, including optical character recognition (OCR), molecular structure recognition, and radiography report generation. According to Google:

We’re incredibly excited to see what you create with PaliGemma 2. Join the vibrant Gemma community, share your projects to the Gemmaverse, and let’s continue to explore the boundless potential of AI together. Your feedback and contributions are invaluable in shaping the future of these models and driving innovation in the field.

PaliGemma 2 is a combination of a pre-trained SigLIP-So400m image encoder and a Gemma 2 LLM. This combination is then further pre-trained on a 1B-example multimodal dataset. Besides the pre-trained base models, Google also released variants that were fine-tuned on the Descriptions of Connected and Contrasting Images (DOCCI) dataset, a collection of images and corresponding detailed descriptions. The fine-tuned variants can generate long, detailed captions of images, which are “more factually aligned sentences” than those produced by other VLMs.

Google created other fine-tuned versions for benchmarking purposes. The benchmark tasks included OCR, table structure recognition, molecular structure recognition, optical music score recognition, radiography report generation, and spatial reasoning. The fine-tuned PaliGemma 2 outperformed previous state-of-the-art models on most of these tasks.

The team also evaluated performance and inference speed for quantized versions of the model running on a CPU instead of a GPU. Reducing the model weights from full 32-bit to mixed-precision quantization showed “no practical quality difference.” 
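Google’s exact mixed-precision recipe isn’t detailed here, but the general idea of weight quantization is easy to illustrate. This sketch (a generic example, not the PaliGemma 2 scheme) round-trips a small weight vector through symmetric int8 quantization and measures the reconstruction error:

```python
def quantize_int8(weights):
    """Symmetric per-tensor quantization: map floats to integers in [-127, 127]."""
    scale = (max(abs(w) for w in weights) / 127) or 1.0
    return [round(w / scale) for w in weights], scale

def dequantize(q, scale):
    """Map the integers back to approximate float weights."""
    return [v * scale for v in q]

weights = [0.030, -0.254, 0.118, 0.008, -0.090]
q, scale = quantize_int8(weights)
restored = dequantize(q, scale)

# The round-trip error is bounded by half a quantization step (scale / 2),
# which for well-behaved weight distributions is small relative to the
# weights themselves -- the intuition behind "no practical quality difference."
max_error = max(abs(w - r) for w, r in zip(weights, restored))
```

Storing one byte per weight instead of four also cuts memory traffic by 4x, which is why quantization helps most on CPU inference, where bandwidth is the bottleneck.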

In a Hacker News discussion about the model, one user wrote:

Paligemma proves easy to train and useful in fine-tuning. Its main drawback was not being able to handle multiple images without being partly retrained. This new version does not seem to support multiple images as input at once. Qwen2vl does. This is useful for vision RAG typically.

Gemma team member Glenn Cameron wrote about PaliGemma 2 on X. In response to a question about using it to control a robot surgeon, Cameron said:

I think it could be taught to generate robot commands. But I wouldn’t trust it with such high-stakes tasks…Notice the name of the model is PaLM (Pathways Language Model). The “Pa” in PaliGemma stands for “Pathways”. It is named that because it continues the line of PaLI (Pathways Language and Image) models in a combination with the Gemma family of language models.

InfoQ previously covered Google’s work on using VLMs for robot control, including Robotics Transformer 2 (RT-2) and PaLM-E, a combination of their PaLM and Vision Transformer (ViT) models.

The PaliGemma 2 base models, as well as fine-tuned versions and a script for fine-tuning the base model, are available on Hugging Face. Hugging Face also hosts a web-based visual question answering demo of a fine-tuned PaliGemma 2 model.

About the Author



Nvidia Announces Arm-Powered Project Digits, Its First Personal AI Computer

MMS Founder
MMS Sergio De Simone

Article originally posted on InfoQ. Visit InfoQ

Capable of running 200B-parameter models, Nvidia Project Digits packs the new Nvidia GB10 Grace Blackwell Superchip to allow developers to fine-tune and run AI models on their local machines. Starting at $3,000, Project Digits targets AI researchers, data scientists, and students to allow them to create their models using a desktop system and then deploy them on cloud or data center infrastructure.

Nvidia Grace Blackwell brings together Nvidia’s Arm-based Grace CPU and Blackwell GPU with the latest-generation CUDA cores and fifth-generation Tensor Cores connected via NVLink®-C2C. A single unit will include 128GB of unified, coherent memory and up to 4TB of NVMe storage.

According to Nvidia, Project Digits delivers up to 1 PetaFLOP for 4-bit floating point, which means you can expect that level of performance for inference using quantized models but not for training. Nvidia has not disclosed the system’s performance for 32-bit floating point or provided details about its memory bandwidth.
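A quick back-of-the-envelope check makes the 200B-parameter claim plausible. The numbers below are rough illustrative assumptions, not Nvidia specifications:

```python
# All numbers below are rough assumptions for illustration,
# not Nvidia specifications.
params = 200e9           # a 200B-parameter model
bytes_per_param = 0.5    # 4-bit quantized weights: half a byte each
unified_memory = 128e9   # Project Digits' 128 GB of unified memory

weight_bytes = params * bytes_per_param   # 100 GB of weights
fits = weight_bytes < unified_memory      # leaves ~28 GB for activations,
                                          # the KV cache, and the OS
```

The same model at 16-bit precision would need around 400 GB, which is why the 200B figure only makes sense together with the 4-bit quantization the PetaFLOP number is quoted at.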

The announcement of Project Digits made some developers ponder whether it could be a preferable choice to an Nvidia RTX 5090-based system. Compared to a 5090 GPU, Project Digits has the advantage of coming in a compact box and not requiring the huge fan used on the 5090. On the other hand, the use of low-power DDR5 memory on Project Digits seems to imply a reduced bandwidth compared to the 5090’s GDDR7 memory, which further hints at Project Digits being optimized for inference. However, lacking final details, it’s hard to say how the two solutions compare performance-wise.

Another interesting comparison that has been brought up is with Apple’s M4 Max-based systems, which may pack up to 196GB of memory and are thus suitable for running large LLMs for inference. Here, there seem to be more similarities between the two systems, including the use of DDR5X unified memory, so Nvidia is seemingly aiming, among other things, to provide an alternative to that kind of solution.

Project Digits will run Nvidia’s own Linux distribution, DGX OS, which is based on Ubuntu and includes Nvidia-optimized Linux kernel with out-of-the-box support for GPU Direct Storage (GDS). Nvidia says the first units will be available in May this year.

About the Author



AJ Styles comments on his recovery from injury, Chelsea Green wants Matt Cardona in WWE

MMS Founder
MMS RSS

Posted on mongodb google news. Visit mongodb google news

– On the October 4, 2024 edition of SmackDown, AJ Styles suffered an injury in a match against Carmelo Hayes. Styles then revealed that he had suffered a “mid-foot ligament sprain”. A fan recently asked Styles if he could provide an update on his injury, and the response wasn’t as positive as we would all have hoped.

Chelsea Green expressed her desire to see her husband, Matt Cardona, return to WWE. She said “I want to see Matt in WWE, honestly more than anything else, anything else that I even could want out of my career. I feel guilt because first of all, he supports me like no other. He’s so happy for me. He watches everything I do. He’s at shows when I’m winning championships. But at the end of the day, I go home and I know that this was his dream. I joke with you about the fact that I googled how to be a WWE Diva, but he didn’t. He literally came out of the womb wanting to be a WWE Superstar. So I just want him so badly to come back and have that final closure, that ending that he so deserves as, I mean, he was with WWE for a very, very, very long time. I think the fans want it too. Like, I don’t want to speak for anyone, but I just, I get a lot of people asking, you know, when’s he coming back? When’s he coming back? Gosh, I would love, love, love to see him back.“

Article originally posted on mongodb google news. Visit mongodb google news
