MongoDB, Inc. Launches Certification Program for Cloud Partners Offering … – MarketScreener

MMS Founder
MMS RSS

Posted on MongoDB Google News.

MongoDB, Inc. announced a new Certified by MongoDB Database-as-a-Service (DBaaS) program that enables cloud infrastructure partners to offer their customers a first-class, fully supported MongoDB DBaaS. Given MongoDB’s worldwide popularity, many customers require a certified MongoDB DBaaS solution for their use cases on specific clouds where MongoDB Atlas is not available. Cloud partners in the program will be able to deploy MongoDB’s industry-leading database and data services, including full-text search and vector search, as a managed service through their own cloud platforms, giving their customers a highly reliable and scalable platform for securely building and deploying cloud-native applications.

The Certified by MongoDB DBaaS program also provides cloud partners the dedicated support they need to build deep technology integrations, while opening up joint go-to-market initiatives with the MongoDB Partner Ecosystem to help accelerate their customers’ success. Millions of developers and tens of thousands of customers around the world rely on MongoDB Atlas running on AWS, Azure, and Google Cloud to securely power their business-critical applications. However, many customers have deployment requirements that call for other cloud providers, and they want confidence that third-party-managed MongoDB services will deliver the performance, security, and reliability MongoDB is known for.

The Certified by MongoDB DBaaS program is designed to address this need by enabling cloud partners to offer supported MongoDB technology through a cloud service, while also giving them opportunities to:

Access new product features available in MongoDB Atlas: Available later this year, Certified by MongoDB DBaaS cloud partners will be able to offer vector search and full-text search capabilities in their managed services. This allows cloud partners in the program to provide their customers the technology to build modern applications without the added complexity and overhead of bolting on other solutions, and to give developers an experience closer to that of MongoDB Atlas.

Collaborate on deep technology integrations with MongoDB and its partner ecosystem: MongoDB’s ecosystem of more than 1,000 partners delivers proven expertise and technology integrations that solve difficult business problems (e.g., data security and governance, application observability, data warehousing and analytics).

By integrating complementary technologies, MongoDB partners can reach new customers, expand their market presence, and tap into new avenues of revenue with joint go-to-market initiatives.

Benefit from industry-specific expertise across the MongoDB Partner Ecosystem to help customers build, deploy, and scale modern applications: By joining the MongoDB Partner Ecosystem, organizations can collaborate and apply their deep industry knowledge to solve customer problems. MongoDB partners use their expertise to understand customer business needs and deliver tailored, industry-specific solutions, from financial services to healthcare to manufacturing and beyond. Organizations in the MongoDB Partner Ecosystem can work with one another to offer the highly tailored services and knowledge customers require to accelerate time to market with secure, scalable, and high-performing applications.

Article originally posted on MongoDB Google News.

Subscribe for MMS Newsletter

By signing up, you will receive updates about our latest information.



MongoDB Announces General Availability of MongoDB Atlas Vector Search … – MarketScreener



MongoDB, Inc. announced the general availability of MongoDB Atlas Vector Search on Knowledge Bases for Amazon Bedrock to enable organizations to build generative AI application features using fully managed foundation models (FMs) more easily. MongoDB Atlas is the world’s most widely available developer data platform and provides vector database capabilities that make it seamless for organizations to use their real-time operational data to power generative AI applications. Amazon Bedrock is a fully managed service from Amazon Web Services (AWS) that offers a choice of high-performing FMs from leading AI companies via a single API, along with a broad set of capabilities organizations need to build generative AI applications with security, privacy, and responsible AI.

Customers across industries can now use the integration with their proprietary data to more easily create applications that use generative AI to autonomously complete complex tasks and to deliver up-to-date, accurate, and trustworthy responses to end-user requests. The new integration with Amazon Bedrock allows organizations to more quickly and easily deploy generative AI applications on AWS that can act on data processed by MongoDB Atlas Vector Search to deliver more accurate, relevant, and trustworthy responses. Unlike add-on solutions that only store vector data, MongoDB Atlas Vector Search powers generative AI applications by functioning as a highly performant and scalable vector database with the added benefit of being integrated with a globally distributed operational database that can store and process all of an organization’s data.

Customers can use the integration between MongoDB Atlas Vector Search and Amazon Bedrock to privately customize FMs like large language models (LLMs) from AI21 Labs, Amazon, Anthropic, Cohere, Meta, Mistral AI, and Stability AI with their real-time operational data by converting it into vector embeddings for use with LLMs. Using Agents for Amazon Bedrock for retrieval-augmented generation (RAG), customers can then build applications with LLMs that respond to user queries with relevant, contextualized responses, without needing to write code manually. For example, a retail organization can more easily develop a generative AI application that uses autonomous agents for tasks like processing real-time inventory requests or personalizing customer returns and exchanges by automatically suggesting in-stock merchandise based on customer feedback. Organizations can also isolate and scale their generative AI workloads independently of their core operational database with MongoDB Atlas Search Nodes to optimize cost and performance, with up to 60 percent faster query times.
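The retrieval step of RAG described above can be sketched in a few lines. This is not the Atlas Vector Search or Bedrock API; it is a toy illustration of how a query embedding is compared against stored document embeddings to find context for an LLM, with a deliberately crude keyword-count "embedding" standing in for a real model:

```python
from math import sqrt

# Toy "embedding": count occurrences of a few keywords. A real system would
# use model-generated embeddings and a vector index (e.g., Atlas Vector Search).
KEYWORDS = ["return", "exchange", "stock", "shipping"]

def embed(text):
    words = text.lower().split()
    return [float(words.count(k)) for k in KEYWORDS]

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na, nb = sqrt(sum(x * x for x in a)), sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

# Operational documents stored alongside their embeddings.
docs = [
    "items in stock can be suggested for an exchange",
    "shipping delays are posted on the status page",
]
index = [(d, embed(d)) for d in docs]

def retrieve(query):
    """Return the most similar document -- the 'R' in RAG."""
    qv = embed(query)
    return max(index, key=lambda pair: cosine(qv, pair[1]))[0]

# The retrieved text would then be added to the LLM prompt as context.
print(retrieve("customer wants an exchange for an item in stock"))
```

In a production pipeline, the retrieved documents are appended to the prompt so the model grounds its answer in current operational data rather than its training set.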

With fully managed capabilities, this new integration enables joint AWS and MongoDB customers to securely use generative AI with their proprietary data to its full extent throughout an organization, and to realize business value more quickly, with less operational overhead and manual work.




MongoDB, Inc. Announces New Capabilities for MongoDB Atlas to Streamline Building …



MongoDB, Inc. at its developer conference MongoDB.local NYC announced new capabilities for MongoDB Atlas that make it faster and easier to build, deploy, and run modern applications with the performance and scale organizations require. MongoDB Atlas is the most widely distributed developer data platform in the world, and tens of thousands of customers and millions of developers rely on its industry-leading operational database and integrated data services to power business-critical applications across cloud providers. The general availability of MongoDB Atlas Stream Processing makes it easier to use real-time data from a wide variety of sources to run highly responsive applications.

MongoDB Atlas Search Nodes on Microsoft Azure give organizations more flexibility for optimizing the performance and cost of generative AI workloads that drive intelligent applications at scale. MongoDB Atlas Edge Server reduces the complexity of managing data for distributed applications that span locations from the cloud to on premises to devices at the edge. The new MongoDB Atlas capabilities announced enable organizations of all sizes across industries to build, deploy, and run next-generation applications with the security, resiliency, and durability today’s business environment demands:

Simplify building highly responsive applications with streaming data: Now generally available, MongoDB Atlas Stream Processing enables developers to take advantage of data in motion and data at rest to power event-driven applications that can respond to changing conditions.

Streaming data, coming from sources like IoT devices, customer browsing behaviors, and inventory feeds, is critical to modern applications because it allows organizations to create dynamic experiences as end-user behaviors or conditions change. However, streaming data is highly dynamic, and inflexible data models are not ideal for building event-driven applications that need to continuously adjust to the real world. Because it is built on a flexible and scalable data model, MongoDB Atlas Stream Processing allows organizations to build applications that analyze data in motion and at rest and adjust business logic in seconds.
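As a rough illustration of the event-driven pattern described here (not Atlas Stream Processing's actual API), the sketch below continuously evaluates a feed of sensor readings against business logic as each event arrives, using a sliding window over the most recent values:

```python
from collections import deque

# Keep a sliding window of the most recent readings and flag when the
# window average crosses a threshold -- the kind of "data in motion"
# logic a stream processor evaluates continuously.
WINDOW = 3
THRESHOLD = 30.0

def process(events):
    window = deque(maxlen=WINDOW)
    alerts = []
    for reading in events:          # in production this loop never ends
        window.append(reading)
        avg = sum(window) / len(window)
        if len(window) == WINDOW and avg > THRESHOLD:
            alerts.append(round(avg, 1))   # trigger business logic here
    return alerts

# Simulated IoT temperature feed: only the final window averages above 30.
print(process([25.0, 28.0, 29.0, 31.0, 33.0]))  # [31.0]
```

The same shape applies to the examples in the article: replace temperature readings with transaction events or inventory updates, and replace the threshold check with fraud scoring or routing logic.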

For example, organizations can build applications that dynamically optimize shipping routes based on weather conditions and supply chain data feeds, or continuously analyze financial transaction data feeds and purchase histories for AI-powered fraud detection in near-real time. By using MongoDB Atlas Stream Processing, organizations can do more with their data in less time and with less operational overhead.

Optimize the performance and efficiency of generative AI applications: MongoDB Atlas Search Nodes (generally available on AWS and Google Cloud, and now in preview on Microsoft Azure) provide dedicated infrastructure for generative AI and relevance-based search workloads that use MongoDB Atlas Vector Search and MongoDB Atlas Search.

MongoDB Atlas Search Nodes are independent of core operational database nodes and allow customers to isolate workloads, optimize costs, and reduce query times by up to 60 percent. In addition to helping optimize performance and cost, MongoDB Atlas Search Nodes enable organizations to run highly available generative AI and relevance-based search workloads at scale for the most demanding applications. For example, an airline can use MongoDB Atlas Search Nodes to optimize the performance and scale of an AI-powered booking agent experiencing a surge in usage by seamlessly isolating the vector search workload and scaling the required infrastructure, without resizing the compute or memory resources for their operational database workload.

Deploy applications that seamlessly connect from the cloud to the edge: Now available in public preview, MongoDB Atlas Edge Server gives developers the capability to deploy and operate distributed applications in the cloud and at the edge. MongoDB Atlas Edge Server provides a local instance of MongoDB with a synchronization server that runs on local or remote infrastructure and significantly reduces the complexity and risk involved in managing applications in edge environments. With MongoDB Atlas Edge Server, applications can access operational data even with intermittent connections to the cloud.

For example, a hospital system can use MongoDB Atlas Edge Server to help enable applications running on patient healthcare devices to remain functional during power outages and connectivity disruptions. With Atlas Edge Server, their data will automatically synchronize once connectivity is restored. MongoDB Atlas Edge Server also supports data tiering to prioritize the synchronization of critical data to the cloud, reducing network congestion.
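The synchronize-on-reconnect behavior described above can be sketched generically. The class below is an invented stand-in, not the Atlas Edge Server API: writes always succeed against a local store, and a pending queue is drained to the cloud once connectivity returns:

```python
# Hypothetical sketch of the edge pattern: the application stays functional
# while offline because writes land in a local data layer, and unsynchronized
# writes are queued until a connection to the cloud is available.
class EdgeStore:
    def __init__(self):
        self.local = {}      # local data layer, always writable
        self.pending = []    # writes not yet synchronized
        self.cloud = {}      # stand-in for the cloud database

    def write(self, key, value):
        self.local[key] = value
        self.pending.append((key, value))

    def sync(self, online):
        """Drain pending writes to the cloud; return how many were synced."""
        if not online:
            return 0         # stay functional; just keep buffering
        synced = len(self.pending)
        for key, value in self.pending:
            self.cloud[key] = value
        self.pending.clear()
        return synced

store = EdgeStore()
store.write("patient:42:pulse", 71)   # connectivity down: local write only
store.sync(online=False)
store.write("patient:42:pulse", 74)
print(store.sync(online=True))        # both buffered writes drained: 2
```

A real implementation would also handle conflict resolution and the data tiering mentioned below (prioritizing which pending writes sync first), which this sketch omits.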

MongoDB Atlas Edge Server also maintains a local data layer to reduce latency and enable faster actions based on real-time data. With MongoDB Atlas Edge Server, organizations can seamlessly run highly available, modern applications closer to end users with less complexity.

MongoDB customers welcome the new capabilities to help them build modern applications with less complexity. Acoustic is a customer-obsessed marketing technology company committed to creating powerful tools that are easy to use.

“At Acoustic, our key focus is to empower brands with behavioral insights that enable them to create engaging, personalized customer experiences,” said John Riewerts, EVP of Engineering at Acoustic. “With Atlas Stream Processing, our engineers can leverage the skills they already have from working with data in Atlas to process new data continuously, ensuring our customers have access to real-time customer insights.”

Meltwater empowers companies with a suite of solutions that spans media, social, consumer, and sales intelligence.

By analyzing ~1 billion pieces of content each day and transforming them into vital insights, Meltwater unlocks the competitive edge to drive results. “MongoDB Atlas Stream Processing enables us to process, validate, and transform data before sending it to our messaging architecture in AWS, powering event-driven updates throughout our platform,” said Cody Perry, Software Engineer at Meltwater. “The reliability and performance of Atlas Stream Processing has increased our productivity, improved the developer experience, and reduced infrastructure cost.”




Enhancing Developer Experience for Creating Artificial Intelligence Applications

Ben Linders

Article originally posted on InfoQ.

For one company, large language models created a breakthrough in artificial intelligence (AI): instead of needing AI science expertise, developers could build features by crafting prompts and calling APIs. To enhance the developer experience of building AI applications and tools, they defined and established principles around simplicity, immediate accessibility, security and quality, and cost efficiency.

Romain Kuzniak spoke about enhancing developer experience for creating AI applications at FlowCon France 2024.

Scaling their first AI application to meet the needs of millions of users presented a substantial gap, Kuzniak said. The transition required them to hire data scientists, develop a dedicated technical stack, and navigate through numerous areas where they lacked prior experience:

Given the high costs and extended time to market, coupled with our status as a startup, we had to carefully evaluate our priorities. There were numerous other opportunities on the table with potentially higher returns on investment. As a result, we decided to pause this initiative.

The breakthrough in AI came with the emergence of Large Language Models (LLMs) like ChatGPT, which shifted the approach to utilizing AI, Kuzniak mentioned. The key change that LLMs brought was a significant reduction in the cost and complexity of implementation:

With LLMs, the need for data scientists, data cleansing, model training, and a specific technical infrastructure diminishes. Now, we could achieve meaningful engagement by simply crafting a prompt and utilizing an API. No need for AI science expertise.

Kuzniak mentioned that enhancing the developer experience is as crucial as improving user experience. Their goal is to eliminate any obstacles in the implementation process, ensuring a seamless and efficient development flow. They envisioned the ideal developer experience, focusing on simplicity and effectiveness:

For the AI implementation, we’ve established key principles:

  • Simplicity: enable implementation with just one line of code.
  • Immediate Accessibility: allow real-time access to prompts without the need for deployment.
  • Security and Quality: integrate security and quality management by design.
  • Cost Efficiency: design cost management and thresholds into the system by default.

Kuzniak mentioned that their organizational structures are evolving along with the technology landscape. The traditional cross-functional teams comprising product managers, designers, and developers, while still relevant, may not always be the optimal setup for AI projects, as he explained:

We should consider alternative organizational models. The way information is structured and its subsequent impact on the quality of outcomes, for example, has highlighted the need for potentially new team compositions. For instance, envisioning teams that include AI product managers, content designers, and prompt engineers could become more commonplace.

Kuzniak advised applying the same level of dedication and best practices to improve the internal user experience as you would for your external customers. Shift towards a mindset where your team members consider their own ideal user experience and actively contribute to creating it, he said. This approach not only elevates efficiency and productivity, but also significantly enhances employee satisfaction and retention, he concluded.

InfoQ interviewed Romain Kuzniak about developing AI applications.

InfoQ: How do your AI applications look?

Romain Kuzniak: Our AI applications are diverse, with a stronger focus on internal use, particularly given our nature as an online school generating substantial content. We prioritize making AI tools easily accessible to the whole company, notably integrating them within familiar platforms like Slack. This approach ensures that our staff can leverage AI seamlessly in their daily tasks.

Additionally, we’ve developed a prompts catalogue. This initiative encourages our employees to leverage existing work, fostering an environment of collective intelligence and continuous improvement.

Externally, we’ve extended the benefits of AI to our users through the introduction of a student AI companion for example. This tool is designed to enhance the learning experience by providing personalized support and guidance, helping students navigate their courses more effectively.

InfoQ: What challenges do you currently face with AI applications and how do you deal with them?

Kuzniak: Among the various challenges we face with AI applications, the most critical is resisting the temptation to implement AI for its own sake, especially when it adds little value to the product. Integrating AI features because they’re trendy or technically feasible can divert focus from what truly matters: the value these features bring to our customers. We’ve all encountered products announcing their new AI capabilities, but how many of these features genuinely enhance user experience or provide substantial value?

Our approach to this challenge is rooted in fundamental product management principles. We continuously ask ourselves what value we aim to deliver to our customers and whether AI is the best means to achieve this goal. If AI can enhance our offerings in meaningful ways, we’ll embrace it. However, if a different approach better serves our users’ needs, we’re equally open to that.




JetBrains IntelliJ IDEA 2024.1 Delivers Support for Java 22 Features

Johan Janssen


JetBrains released IntelliJ IDEA 2024.1 featuring support for Java 22 features, OpenRewrite, WireMock server, the Maven Shade Plugin and full line code completion for Java and Kotlin.

The new release supports Java 22 features such as statements before super(), String Templates and Implicitly Declared Classes and Instance Main Methods. Code written in other languages can be injected into Java’s String Templates by using annotations or via Alt+Enter.

The log output has been improved and allows users to click on a certain log message to open the code which generated the log message. This can be configured via Settings/Preferences | Advanced Settings | JVM languages. IntelliJ will also automatically add a Logger initialization statement when typing log.

Project indexing has been improved and IDE features, such as code completion and syntax highlighting, are now available during indexing for Java and Kotlin. IntelliJ now parses the pom.xml files to construct the project model in order to create the project structure in seconds while the rest of the project is built in the background.

The rename feature in the Maven Shade Plugin is now supported and IntelliJ highlights the code and provides navigation.

When adding a breakpoint on a line with multiple statements such as Lambda functions, the IDE displays inline breakpoints which can be enabled individually.

IntelliJ IDEA now shows which statements aren’t completely covered by tests including the specific branch of the code which wasn’t covered. The feature is enabled by default and may be configured via Settings/Preferences | Build, Execution, Deployment | Coverage.

A new terminal is available as a beta feature and may be enabled via Settings/Preferences | Tools | Terminal | Enable New Terminal. Currently Bash, Zsh and PowerShell are supported and work is ongoing to support other shells. The terminal offers command completion which supports paths, commands, arguments and options. The new command history makes it easier to filter and find commands. The new UI displays each command in a separate block. Navigating between the blocks is possible on Windows and Linux via Ctrl+↑ / Ctrl+↓ and on macOS via ⌘↑ / ⌘↓. Anastasia Shabalina, Product Manager at JetBrains, wrote this blog post describing all the new terminal features including examples.

Sticky lines in the editor keep class and method definitions fixed at the top of the editor while scrolling, providing a better context overview.

This release makes it possible to zoom the entire IDE via View | Appearance and then select Zoom IDE.

The Kotlin K2 mode has been introduced for improved performance and better stability during code analysis, using the embedded K2 Kotlin compiler.

JetBrains IntelliJ IDEA 2024.1 uses the official Kotlin style guide for all projects unless overridden. In addition, static imports are now preserved when copying and pasting.

Support for the Scala 3 syntax, autocompletion and debugger has been improved.

While the previous features are available in every IntelliJ IDEA edition, the features in the following paragraphs are only available in the IntelliJ IDEA Ultimate edition.

Full line code completion for Java and Kotlin, which suggests entire lines of code based on the context analysis of the file, uses advanced deep learning models running locally without sending code over the internet.

The new AI Assistant requires a JetBrains AI subscription and should be installed as a separate plugin. After installation, the AI assistant can be used for code completion, generating tests and generating commit messages. Code highlighting is now enabled in the AI Assistant responses just like in the editor.

This new version supports OpenRewrite to refactor code and, for example, upgrade to a new version of Java or a new version of a framework.

The WireMock server is now supported via a plugin and offers schema completion for JSON configurations, generation of WireMock stub files and the possibility to run the stub on a server from the editor.

The latest version of IntelliJ IDEA is now available on the JetBrains website and via the JetBrains Toolbox App.

A complete overview of all the changes for IntelliJ IDEA 2024.1 may be found in the What’s New in IntelliJ IDEA 2024.1 article, while the What’s New in IntelliJ IDEA 2024.1 video provides a brief overview of the biggest new features. Also, Mala Gupta, Developer Advocate at JetBrains, described support for the various Java 22 features including examples in a recent blog post and this JetBrains webinar.




Presentation: The Creative Act: How Staff+ Is More Art Than Science

David Grizzanti


Transcript

Grizzanti: My name is Dave Grizzanti. I’m a Principal Engineer at The New York Times. I’m going to be talking about how staff-plus is more like art than science. When you see this word, I want everybody to metaphorically close their eyes and picture what you see and what you think of when I say the word artist. I’m guessing you thought of something like this: a person painting, maybe singing, creating something, a drawing maybe. You probably didn’t think of this; you probably didn’t think of an engineer off the top of your head, or somebody making something mechanical. One of the things I want to do today is challenge that idea. This is a picture I asked DALL·E to create of an engineer artist, because I was trying to find a picture that would show somebody who is an artist doing engineering, and I was having a tough time, so I asked DALL·E to do it. The picture doesn’t look all that realistic, but I think it makes the point: somebody typing on a keyboard, with paint all over.

I work in an organization on a team called delivery engineering, specifically within developer productivity. Within that department we’re focused on building an internal developer platform for all of engineering at The New York Times. At The Times, our mission is to build the essential subscription bundle for every English-speaking curious person who seeks to understand and engage with the world. This talk is really about staff-plus roles and my experience in that role: as the title says, how it’s more like art than science. I wanted to spend a few minutes going through my background, because I think it’s relevant to what I’m going to talk about. I’ve been in the industry for about 17 years, since 2006. Unlike everyone else in the staff-plus track, I don’t currently manage people, and I’ve never managed people. I don’t know if that’s completely unique, but I’ve been in the industry a while and I’ve never taken the management plunge. I’ve been fortunate to work in organizations that have had a substantial career ladder, or where, at the point when I needed to make a decision, that kind of option existed for me to continue on the IC track.

What I’m going to discuss is some of the challenges I’ve had in my role at various companies, some of the challenges I think a lot of staff-plus people have, based on conversations I’ve had with friends or just being at conferences like this, how I see some of those challenges being shaped by the different companies we’ve worked at and the roles we’ve had, and how you can overcome them. This is a picture of two different tracks: parallel career paths, with traditional management on the left and a split track on the right. It’s good that companies have this now. Maybe your company has something like this. Audrey talked about a slightly different model of not using progressive titles like this, because so many companies have different versions of it. My last two companies, or the last one and the current one, have this, but they’re variations on it. At my previous company, principal was lower than staff. We didn’t even have staff. In some ways, using these exact phrasings doesn’t necessarily help. You may have seen something like this at your job, or maybe you have one of these titles. Staff or principal or staff-plus is really just a title; it doesn’t usually capture what someone’s doing day-to-day, or what sub-job they have.

Staff Archetypes

Will Larson’s “Staff Engineer” book came out a few years ago, and I think it became a guide for a lot of people in these roles who were looking to grow into them. He defines a few different staff archetypes. I want to go over each of those. They are tech lead, architect, solver, and right-hand. Let me do a quick definition of these to level set. The tech lead guides the approach and execution of a particular team. They partner closely with a single manager, but sometimes they may partner with two or three managers within a focus area. The architect is responsible for the direction, quality, and approach within a critical area. They combine in-depth knowledge of technical constraints, user needs, and organization-level leadership. The solver digs deep into arbitrarily complex problems and finds an appropriate path forward. Some may focus on a given area for long periods. Then, lastly, the right-hand extends an executive’s attention, borrowing their scope and authority to operate particularly complex organizations.

Which one are you? Something else? Outside of the archetypes, I’ve found that our roles can be very vague and are often shaped by personal experience and organizational dynamics. If you’ve been at a company for a long time and you’ve grown within that company, you’re probably filling a much different role than if you were hired from outside the company, where you don’t have organizational context or historical knowledge of legacy systems. That can play a big part in what you’re doing day-to-day. I think my job at Comcast was very different than my job at The Times now, because when I was hired, I was at the top of the IC track, but I didn’t know much about The Times or its systems. The problems that I think people are looking for me to solve are things that I can help with that don’t need that historical context. I think I have traditionally fallen into the tech lead and architect roles. Now we’ve level set: I’ve given the definition of staff-plus and some archetypes.

Life of an Artist

The idea for this talk came from a book I read recently called “The Creative Act: A Way of Being,” by Rick Rubin. He’s a famous music producer who has worked with a lot of really popular bands across a wide, eclectic mix of music: Beastie Boys, Run DMC, and Metallica. The music he has produced has changed over time as he’s grown and gotten older. This book is not about music at all, or about his career. It’s about being a creative person and what it takes to get into that mindset and be creative. I picked up this book because I was feeling stuck in a rut with work, not being creative enough, and feeling a pull to be more creative. I think engineers, or the traditional engineer, struggle with that. We tend to be very tactical, critical thinkers, which is good, but we don’t necessarily let our minds wander and be creative. Between the book and the interviews I’ve listened to with him, I want to draw on some of the lessons he shared for a more artistic life that I think we can use to inspire us and get better at our jobs.

Let’s talk a little bit about the life of an artist. This is a quote from Rick: “To live as an artist is a way of being in the world. A way of perceiving. A practice of paying attention. Refining our sensitivity to tune into the more subtle notes. Looking for what draws us in and what pushes us away. Noticing what feeling tones arise and where they lead.” What I want to take away from that is that everyone is a creator. I don’t think we think of ourselves as artists, going back to that example I showed earlier with the picture. I want people to put away the idea that you’re not creative or a creator. Everything we do in our lives, including writing software, is creative; writing is creative, even if it’s documentation. We’re not just technicians. Art and engineering are both forms of experimentation. We have to embrace ambiguity, perform discovery, and craft a solution, whether to an engineering problem or to a painting.

Active Listening

The next topic I want to talk about is active listening, or this idea of having a sensitive antenna for transmissions. Many of us are just waiting for our next chance to speak instead of really listening. I find myself having this problem a lot, especially since the pandemic started in 2020 and we were all home doing virtual meetings. It’s very easy to be distracted by multitasking.

We definitely have a culture of lots of meetings, and people needing to get work done at the same time as they’re in a meeting. It’s hard to just be present and listen. I think being a staff engineer means putting your team and your project first. It’s not necessarily about what you specifically are contributing, it’s about something bigger. Your role is to help drive progress, help make technical decisions, and keep open lines of communication. Let’s talk about how to make this better, make better use of your time, spend time with your teammates, and learn how to listen more. I mentioned this idea of having a more sensitive antenna. This is something that Rick talks about a lot in the book: living a life of awareness and always being on the lookout for clues. Recently, I was reading “The Staff Engineer’s Path,” by Tanya Reilly. She and Rick both talk about this idea that when you’re doing something you’ve done all the time, it can be easy to be on autopilot, and you don’t notice things. In Tanya’s book she talks about being in the Irish countryside during the pandemic, walking around with friends, and how they would be less distracted, have this more sensitive antenna, and pick up on subtleties outside. Like, “This rock is so pretty,” or, “This tree has these unique qualities.” I think we get caught in this rut where we’re not paying attention to what’s going on around us; we’re just on autopilot, not really on the lookout. Rick talks about improving the sensitivity of your antenna in a few ways. One is meditation. Meditating is one way of increasing your awareness of your surroundings and being more attuned. The other is just being mindful of your surroundings: quieting your mind and letting signals break through, to live in this constant state of awareness.

Beginner’s Mindset

The next thing is beginner’s mindset. This is the idea of approaching problems as if you have no knowledge or experience in a given area. Years spent gaining experience and honing your craft can often act counterintuitively: they limit the scope of your imagination, creativity, and focus. A few examples of this. I’m sure everyone’s heard of beginner’s luck. I play board games a lot with friends, where I think I’m the expert and should be winning, and then a friend who’s never played before comes over, plays, and beats everybody at the game. You’re like, how did you do this? Somebody coming in with a fresh perspective, without all the historical context, can often see through some of the complexities, look at the problem with fresh eyes, and sometimes reach a better outcome. Within our discipline of engineering, there’s a common problem called the second-system effect: the tendency of small, elegant, and successful systems to be succeeded by over-engineered systems that are bloated due to inflated expectations and the overconfidence of the engineers. The confidence from the first system’s success leads them to start a more complex one that may be beyond their abilities. Unconsciously, the developer’s mind is full of additions that weren’t there in the first project, which cause this bloat and unnecessary complexity. In the beginner’s mind, there are many possibilities, but in the expert’s, there are few. When we become experts, we have such a narrow scope of how we see problem solving that it blinds us a bit. Over years of deliberate practice and execution, we gradually notice recurring patterns, which our minds unconsciously form into shortcuts, rules of thumb, and best practices. In time, these mental cheat codes can hew ever-deepening grooves into what was once a wide-open pasture, walling us into fixed mindsets.
Developing a beginner’s mindset is all about freeing ourselves from preconceived expectations about what should happen next. By doing so, we reduce the risk of stress or disappointment. This mindset often helps artists unblock creativity, start fresh, develop new ideas, and get outside of their own heads. It can help engineers and less creative types as well.

Improving/Avoiding Traps

Next, let’s talk about a few ways to avoid the common traps and hone our beginner’s mindset. Ask questions and listen. What if our assumptions are wrong despite the best evidence at hand? What if solutions drawn from our past are no longer relevant? It goes back to that idea of attuning your antenna and listening more. The next thing is, go slower. We tend to operate on autopilot in areas where we have the most knowledge and experience. This can take us out of the optimal discovery process and cause us to miss steps. Consider answers as middle ground; dogmatic absolutism is the opposite of open-minded curiosity. Avoid pre-judgment. Can you really know how something will work out if you’re too focused on how you believe things should work? Detach from expert mode. Attachment to an expert identity traps us into offering answers before crafting questions. Another way is to work with a beginner, whether that’s someone new to the field, or someone who’s been in the field for a while but doesn’t have expertise in the area you’re working in. Practicing exercises or digging into problems with a beginner can oftentimes break you out of that mindset. Starting a new job at a new company is also a great way of doing that, because you come in not knowing anything, so you’re forced back into a beginner’s way of thinking. Not that I’m saying everybody should start a new job every few years, but it’s a good way to get a different perspective.

Writer’s Block

The next topic is writer’s block, or just getting stuck on a problem: architecture, staffing issues, team dynamics. Artists commonly face these challenges too, as writer’s block, or whatever painters would call their version of it. You often hear of artists wanting a muse. I don’t know if we have muses in software engineering, but I think we can change up our environments to stir up ideas. Let’s talk about a few ways to get around this form of writer’s block that I think are similar to what artists do. For me, I often feel overwhelmed by how large tasks can seem at first, so I like this idea of taking small steps. This talk, for instance: I had this idea of riffing on this art book, and that was it. I was like, how do I get from there to talking for 45 minutes? For me, the answer is to break that up into more manageable chunks and force myself to schedule it into my day. For this talk, I decided I would write 5 minutes of content every day for the next 5 days. If I did that, then I’d have 25 minutes by the end of the week. That actually worked out really well: just spit out as much as you can on a piece of paper. For writers, this is often a suggested paradigm. As you’re writing, don’t try to write and edit at the same time, because that can make you feel stuck and get caught in a loop. Just focus on one thing, write as much as you can, and come back and edit it later. Taking these small steps toward a larger goal can give you a sense of accomplishment each day and make the larger task seem less overwhelming. The next thing is, change your environment and eliminate distractions. This picture really resonated with me when I was looking it up, because I definitely found myself back in 2020, when I was working from home, thinking, I need as many monitors as I can get, I need an iPad over here. I’ve gone the complete opposite way in the last year and a half, and now I have one monitor in front of me and a keyboard: the fewest distractions possible.
Even working from home, if you don’t have a dedicated space, or even if you do, there are a lot of distractions: chores, your kids coming in, your pets. Trying to eliminate some of those is very helpful when you need to focus and break out of whatever you might be stuck on. I mentioned this picture resonated with me because the person is looking at their phone while also looking at their computer.

The next thing is, go for a walk. Again, change your environment. For me, I try to do this at least once a day. I don’t have nature this nice to walk around in, but I get up and go outside. I try not to take my phone with me or look at it. One thing I’ve found very helpful is that if I’m just walking and I think of an idea, I’ll write it down real quick on my phone and then put it away. I’ve thought about even taking a pen and paper with me so I’m not using my phone and getting distracted by Slack. Going for a walk and just stepping away from work for 20 or 30 minutes gets you out of that mindset and can really help generate ideas on the problem you’re contemplating, or other things you might need to think about. Then the last one, which is maybe a little more severe. Cal Newport wrote a book called “Deep Work.” In it, he tells the story of an author who was having trouble getting his book done, so he booked a round-trip business class ticket to Tokyo. He wrote during the whole flight to Tokyo, then got on the return flight and wrote the whole way back. When he got home, he had his finished book. The reason this story resonated with me is that a couple years ago, when I would fly, I would actually have that feeling of, I’m free of distractions. No one can call me, I can’t do anything else, this is the perfect time to read or get work done. It’s interesting that flying usually is not a relaxing experience, but being in an environment where you’re almost forced to sit there, you can’t leave, and no one can call you is a great way to force yourself to focus. I’ve been trying to find ways to get myself into that same mode I’d be in on a plane, but at my house, without spending lots of money. The other idea behind this plane ride was that if you make a big financial investment in doing something, you’re also more likely to force yourself to do the work.
Because not only are you stuck on a plane, but you spent a lot of money to fly there just to write the book. The point is to play with these ideas of making a commitment or changing up your environment to unblock yourself.

The next one is alternative mediums. This is a little harder, I think, in the last couple years with being remote and not having whiteboards. Another trick I like to use for myself or with teams is to change the way you’re talking about a problem, whether that’s writing it down versus talking out loud, or just drawing a picture. That could be an architecture diagram, but it could also be a flowchart or anything that gets around the problem of people talking past each other with words. Language can differ between cultures, with different dialects and different vocabularies. Drawing sometimes breaks away from those problems and clears a better path. Oftentimes, too, drawing with somebody else, or even by myself, is similar to rubber ducking in programming, where you’re just using the other person to lay out your idea. It helps you get the ideas flowing, even if you don’t care what they say. Maybe you find somebody and tell them, “I don’t want any feedback on this idea. I just want you to sit there, let me draw this out, and listen to what I’m doing.” Then lastly, be an anthropologist. A funny story about this picture: I was trying to find a picture of an anthropologist online, which is very hard. It had to be Creative Commons or something I could use. I had a picture of archaeologists, which is a very different field, but I was going to try to make it work. Then I was traveling last week, and I saw this sign on a building, and I was like, I’m going to take a picture of this and use it in my talk. Another area I often see folks struggling with, when they start at a new job or even if they’ve been in a job for a long time, is not understanding organizational dynamics and culture.
This is one area where, if you think like an anthropologist, studying the social behavior of current and previous civilizations, and apply that to your company or your job, it can help you break through some of those problems. It takes work, but meeting people in different parts of the company, or reading historical documentation the company has produced or put out, will help you understand motivations, whether they’re cultural or business-oriented. I’ve taken on this task for myself: to meet as many different people as I can and study the company to help me understand what they’ve done. When talking with more junior engineers, or folks I’ve mentored in the past, I’ve found they often ask, “How do you know so much about x thing that this other team is doing? No one told me what’s going on.” I’ve often said to them, here’s a good way of learning some of this stuff: seek the information out yourself, or chat with somebody you meet who’s on a different team. It’ll give you a good perspective on what’s going on within the company. That wraps up, I think, some of the art topics.

Staff-Plus Challenges

Let’s talk about some of the challenges on the engineering side for staff-plus roles, and how we can use some of these artistic references to help us do our jobs better. Leading without authority is something we often talk about in our industry, especially as individual contributors who are not in management positions and don’t have what we traditionally consider authority. You want someone on your team or another team to work on something with you or for you, or you want to lead other engineers within a team, but not from a traditional management perspective. How do you get them to work with you? There’s a very popular book by Keith Ferrazzi called “Leading Without Authority” that touches on lots of different topics within this arena. One of the things he talks about is co-elevation, which I’ll build on: the idea of relying on other people within the company, other folks that you know, to get things done without the traditional hard or soft power dynamics. Soft power is the ability to co-opt rather than coerce. Coercion is not the best way to get people to do things for you; it’s not going to gain you any favors. You really want to build relationships with people and work together. This idea of co-elevation that Keith talks about is a more mission-driven approach to collaborative problem solving, through fluid partnerships and self-organizing teams. When we co-elevate with one or more of our teammates, we turn them into friends or peers. How do you establish relationships in order to build these self-organizing teams and co-elevate? We talked earlier about this idea of being an anthropologist to learn the inner workings of your organization and its various teams. Use that exercise of meeting people to build relationships with different folks across the org, and build trust and credibility with them.
I found through my last couple jobs that being approachable and friendly is actually amazing for building these relationships. I think some people struggle to put out that friendliness, but showing people that you’re approachable and willing to work with them, and not putting your foot down on different ideas, can really help. When you’re talking with people and doing these interviews, you should develop a set of questions you can ask folks during the time you have with them. This will help set the tone for the interaction. Keep it light so it doesn’t seem like an interview. Your goal here is to listen, not talk: be an active listener.

Next, I want to go over what some of those questions would be, or how to frame this interview. The three sections are: get context, solicit feedback, and ask for advice. In the context section, you could ask things like: tell me about yourself. What do you do for the team or the company? What does a typical day look like for you? On the feedback side: what works really well for you or your team? What’s most frustrating? Then some advice: what did you learn as you were coming up to speed at the company, or recently on a project? Are there any pitfalls I should be aware of? As you use this set of questions to establish your relationships, keep the ones you think got you valuable answers. Stay in touch with those people every few months or so, and keep the relationship active. It can also help, as you’re approaching these interviews, to have a beginner’s mindset. Don’t go into them with any preconceived notions based on the person’s role or position in the company. Somebody in a more junior position will have just as much to say as somebody who’s a VP. Keep the perspective that roles really matter out of the equation.

The next thing, building on this topic of leading without authority, is establishing solid relationships with your peers, not just people in a higher position whom you would look up to. If you happen to be in an organization that doesn’t have a lot of other folks in your same role, this may be a bit of a struggle. You may be in a big company that has a lot of staff or staff-plus engineers, or you may be in a company where you’re the only one. One way to do this is to look outside of your immediate team, as you’re doing those interviews, or just meeting people on whatever chat app you use. The next is to look outside your company. There’s a popular leadership Slack community called Rands, where you can find people from lots of other companies. I’ve chatted with a lot of other staff engineers from other companies there. It’s a great way to learn what they’re doing and some of the challenges they have, without necessarily meeting people in person or coming to conferences. I’d encourage people to maintain and nurture relationships with the people they’ve met. I have a peer group that I meet with regularly, who now work across various companies. It’s a great resource for bouncing ideas off of and keeping yourself up to date with challenges other companies are facing, so you’re not so insular within your current company. Going back to the anthropology idea, through this exercise of interviews and maintaining relationships, I’ve met lots of friends and stayed in touch with people both professionally and personally, even after we’ve both left our companies or jumped two or three times. I think this can be really helpful.

I want to talk about mentorship a little bit. I think most people think of mentorship as: I would like to find a mentor who can teach me something, or I should be mentoring people I can pass wisdom to. Usually those relationships are: I want somebody who’s older or wiser than me, or in a higher position than me; or I want to mentor someone in a lower position or younger than me. One of the things I’ve benefited from is this idea of peer mentorship: finding somebody who’s in the same position as you, the same title, maybe the same age, but with a completely different background or just a different approach to the job. I think this is an often-ignored aspect of mentoring. Companies mostly have formal mentorship programs, where they match you with someone in a higher position, or match you with somebody more junior for you to mentor. This traditional approach skips over the peer-mentoring aspect. A few areas where having a peer mentor can help: someone who complements your skill set. Maybe there’s an area you think you struggle with a lot, and you can find somebody else in a similar position and establish peer mentoring, whether within the company to work on a project together or just as another way to rubber duck. Someone you can rubber duck with, throw ideas at, and work on interesting projects with, or someone you could honestly just commiserate with on shared struggles. If you want to vent about something you’re dealing with at work, somebody who may have similar struggles at another company, or in a slightly different role within your company, can be useful. You don’t have to worry about it messing up any formal mentorship relationships you have.

The next topic is impostor syndrome. I just wanted to touch on this a little bit because I think it’s something that comes up a lot in our industry. I myself have faced it many times. Starting at a new company is an especially big challenge here, because you’re coming in at a high position as a staff-plus engineer. I’ve had this happen to me where people say, “You’re our new principal engineer, I can’t wait to see the things you work on in the next couple months.” That’s very nerve-racking; you think, ok, now I need to make sure I’m living up to expectations. Artists often feel impostor syndrome as well. They look at all the famous artists throughout history and compare themselves: how can I possibly create something that would be as good as that? Despite outward success and a body of successful work, you still feel like a fraud. These feelings are natural, and many people have them, so take a little comfort in that. By allowing yourself to fail, feel uncomfortable, and be exposed to the challenge, you can overcome self-doubt by being open and authentic and sharing your experience with others. A few of the ways I deal with this, which I’ve found useful for other folks too: be open and authentic. Let go of perfectionism; no one’s perfect. Cultivate a bit of self-compassion, and share in each other’s failures. You often only see someone’s successes, but know that in the background they’ve had setbacks too: rejected conference proposals, rejected articles. Find a colleague you can do this with, and celebrate successes with your team. Make sure you’re celebrating wins.

We talked about impostor syndrome just now, but there’s also this idea of generally embracing discomfort. I think that’s an important aspect of growth. Artists, like us, suffer rejection quite often. You need to know that you’re good at what you do, that your creation is worthwhile, and that you’ll grow from the experience. Oftentimes, I think we want to just look at the headline, but the story is not the headline. We miss out on nuance and depth by taking the easy path. For me, embracing discomfort is all about facing the challenges you don’t like, not taking the easy path or doing only the easy items off your to-do list. Tackle the hard problems; don’t snack. Will Larson has a really good article on his blog about snacking: doing only the easy tasks and never taking on the hard challenges. Oftentimes, the hard things are the things that might result in failure, or worse, some humiliation. No one likes humiliation. Hopefully you work in an environment that embraces failure. We need to be comfortable putting ourselves out there with our solutions, like artists do, taking criticism and using it to improve. Don’t avoid trying something just because you think you might fail. There are a lot of cliches about failing and learning, and at the core, many of them are true. We learn from mistakes, big and small. Make time to try the hard things, get your hands dirty, and put your creations out in the world. Part of embracing this work and discomfort is making time for it. Let’s talk a little bit about that in the next section.

I think many of us are obsessed with this idea of time management and squeezing every last second out of the day to be productive. I read another book recently by Oliver Burkeman called “Four Thousand Weeks.” I thought it was going to be about time management, but it wasn’t so much about time management as a philosophical look at how much time we have in our lives. The title, “Four Thousand Weeks,” is the number of weeks you would live if you lived to be 80, which is a little scary: you only have 4,000 weeks, if that. This book really gets at the core of how we should value our time, and worry less about trying to squeeze every last minute out of the day to be productive. In some ways, our obsession with productivity is making us more miserable. Take comfort in the fact that your to-do list will never be done. We think, if we just finish this project, this presentation, then we’ll be done. I think I said to myself this week, “After this talk is done, then I can relax,” which is not true. You’re never done. There’s a chapter in the book about learning patience by staring at a painting for three hours. This was an assignment a teacher gave students: they had to go sit in an art museum and just look at a painting for three hours. This may seem like torture, but the lesson is to force students to learn patience. We want everything immediately. Why would I spend three hours looking at a painting? That’s so unproductive. We need to give ourselves time to think, to daydream, to use the skills I discussed earlier, learning to be an active listener. Make time for unplanned work in your day. Most of us wake up with a to-do list. We jump on meetings. We immediately start working. Try to set aside some time where you have nothing planned. In this unplanned time, think of a problem you may have been trying to solve recently. Maybe you go for a walk, change up your environment.
Anything that would help you get out of the trappings of your regular day.

Inception to Creation

Let’s wrap up some of these ideas and lessons into a format we can use to embrace and grow our creativity. Going back to the book, Rick talks about taking your ideas from inception to creation through four steps. The first is seeds. This is where you collect as many ideas as you can. You open up your attention to the world and let inspiration in, collecting seeds from the world around you that you can plant later. The job here is not to judge the ideas or think too much about them; it’s just to collect them so you can reflect later. The next is experimentation. This is where you play with some of the high-potential seeds, exploring them in whatever ways you can imagine, so you can start seeing which ones have potential life. You’re not doing any editing here, just a lot of experimental play to see where you may focus your attention later. Your level of excitement over time is a good metric for selecting which seeds to focus on and take to the next stage. The next is crafting. This is the phase when you have ideated and experimented freely and you have a clear sense of direction. You may find yourself rotating back to the experimentation phase as you begin to refine your ideas, learning more information that helps direct your art. While you’re executing in this stage, you’re still open and adaptive to the many possibilities of your work. Then lastly is completion. This is the final phase, one that can be supported by a deadline to help bring your art into the world. This phase is more about refining than discovering or building, as those belong to the earlier stages; completing the work means making it the best you can make it. It’s helpful not to extend this stage for too long, because you may lose your sense of connection to the work.

Key Takeaways and Themes

The headline is not the story. We don’t pay enough attention. I’m sure a lot of us look at news articles online, I do this all the time, and don’t read past the headline. We miss nuance and depth when we live on the surface. Discipline is not a lack of freedom; it’s a harmonious relationship with time. Managing your schedule and daily habits well is a necessary component to freeing up practical and creative capacity to make great art. It seems counterintuitive to make time for free thinking, but it’s good to make that part of your day. Make time in your day to not work on something specific; let yourself daydream. Everyone is a creator. Become a better listener. Try to cultivate a beginner’s mindset and increase the sensitivity of your antenna. Don’t try to micromanage every aspect of your day. Set aside one to two hours in the morning or afternoon, whenever you think you’ll be least distracted. Block off your calendar if you can, at least twice a week, for unplanned work. I know that sounds ironic, planning for unplanned work, but trust me, it’s worth it. Second is build new relationships. Try to meet at least one new person in your company every month. Join or start some form of peer mentoring program. Maybe use a coffee-chat or Slack bot to meet new people. At my last job I had a goal of meeting every distinguished engineer at the company over the course of the year. I left before I achieved that, but it was a good goal to have. Then last is work on improving your active listening. Make your antenna more sensitive. Meditation is not for everyone, but find something that works for you: walks, quiet time. Commit to spending 30 minutes a few times a week exploring this space to improve.



12 Top Open Source Databases to Consider – TechTarget

Databases are fundamental to modern IT — both traditional on-premises ones and newer cloud databases help facilitate all manner of applications. Initially, the database market was dominated by proprietary technologies owned and controlled by a single vendor. Such products are still widely used, but open source databases have also gained broad user adoption.

Open source software offers user organizations the promise of source code developed in the open, typically in a community-driven process. The aim is to expand the number of people involved in the development process and not lock users into a specific vendor’s technology. The increased use of open source databases was partly spurred by the rise in popularity of Linux and cloud-based systems, which often rely on the open source OS. It was also driven by the emergence of NoSQL database technologies, many of which adhere to or align with the open source model.

What are open source databases?

Open source databases are developed and released under an open source license. While open source is sometimes used as a marketing term, it has a very specific definition when it comes to software licenses. To be a bona fide open source technology, a database needs to use a license approved by the Open Source Initiative. The OSI is the governing body that determines whether licenses adhere to the Open Source Definition (OSD), which is the guiding document for open source licensing.

Things have become more complicated, though. A growing number of vendors that created open source databases have adopted licenses that largely adhere to the tenets of the OSD but aren’t OSI-approved. Most commonly, that’s because they require cloud providers offering database as a service (DBaaS) implementations of a technology to publicly release modified or related source code under the same license. Such licenses are typically referred to as source available ones. Nonetheless, databases licensed under them are often still listed together with technologies that remain fully open source as alternatives to proprietary, closed source databases.


The broad category of open source and source available databases contains various types of database software that support different applications. That includes SQL-based relational databases, the most widely used type, and the four primary NoSQL technologies — key-value stores, document databases, wide-column stores and graph databases. Open source versions of special-purpose systems, such as vector databases and time series databases, are also available. In addition, many vendors now offer multimodel databases that support more than one data model.

Potential benefits of using open source databases

Open source databases offer many potential benefits, some of which also apply to source available technologies. The following are among the primary benefits for user organizations:

  • Easy to get started. A core premise of the open source approach is that the technology is freely available. As a result, users can easily try out and deploy an open source database without first having to pay for it, although in many cases vendors also offer paid support and closed source versions with additional features.
  • Community support and engagement. Open source or source available code typically comes with a community of engaged users and contributors who can help new users with the technology. It also enables a degree of participation in the code development process. For example, users can submit bug reports and feature requests and become contributors themselves.
  • Understandable source code. When source code is open and can be viewed by anyone, there’s a better chance to understand how a database works and how it can be used effectively to meet business needs.
  • Flexibility and customization. With some open source licenses, developers are free to modify the database software to meet specific custom requirements.
  • Improved security. Because the source code is open, developers, users and security researchers can thoroughly scrutinize it to identify vulnerabilities. That enables rapid patching of vulnerabilities after they’re discovered.

The technologies listed below are some of the most prominent open source and source available databases. The list was compiled by TechTarget editors based on research of the database market, including vendor rankings by Gartner and database management system (DBMS) popularity rankings on the DB-Engines website. However, the list itself is unranked. It includes five relational open source databases, three NoSQL ones and four source available technologies, organized in that order.

The writeups about each technology provide details on key features, potential use cases, licensing and commercial support options to help organizations choose the right open source database for their application needs.

1. MySQL

MySQL is among the most widely deployed open source databases. It was first released in 1996 as an independent effort led by Michael “Monty” Widenius and two other developers, who co-founded MySQL AB to create the database. The company was acquired in 2008 by Sun Microsystems, which was then bought by Oracle in 2010. MySQL has remained a core part of Oracle’s database portfolio ever since while being maintained as open source software.

A relational database, MySQL was originally positioned as an online transaction processing (OLTP) system and is still primarily geared to transactional uses, although Oracle’s MySQL HeatWave cloud database service now also supports analytics and machine learning applications. MySQL gained much of its early popularity as a cornerstone of the LAMP stack of open source technologies — Linux, Apache, MySQL and PHP, Perl or Python — that powered the first generation of web development. It continues to be an underlying database on many websites today.

Common use cases: Like other relational databases, MySQL complies with the ACID properties — atomicity, consistency, isolation and durability — for ensuring data integrity and reliability. Because of that, it supports a broad range of applications. For example, MySQL is commonly used as a web application server and to run cloud applications and content management systems.

Licensing: MySQL is dual-licensed under the GPL version 2 open source license and an Oracle one for organizations looking to distribute the database along with commercial applications.

Source code repository: https://github.com/mysql/mysql-server

Commercial support options: There are numerous commercial implementations of MySQL. Oracle provides multiple options in addition to MySQL HeatWave, including Enterprise and Standard editions and an embedded version. MySQL is also available in the cloud as part of the Amazon Relational Database Service (RDS) from AWS, as well as Google’s Cloud SQL and Microsoft’s Azure Database services. Vendors such as Aiven, PlanetScale and Percona offer MySQL cloud services, too.

2. MariaDB

MariaDB debuted in 2009 as a fork of MySQL created by a team also led by Widenius, who left Sun early that year over concerns about the direction of MySQL’s development. Work on MariaDB started while he was still at Sun, and it was originally designed to be a drop-in replacement for MySQL. That remained fully the case only through the 5.5 releases of the two databases; after that, new features not in MySQL were added to MariaDB, which adopted different version numbering for subsequent releases.

Even with newer updates, though, it’s still relatively easy to migrate from MySQL to MariaDB. The latter’s data files are generally binary compatible with MySQL ones, and the client protocols of the databases are also compatible. As a result, users in many cases can simply uninstall MySQL and install MariaDB to change between them. MariaDB PLC, which leads development of the software through the MariaDB Foundation, maintains a list of incompatibilities and feature differences with MySQL.

Common use cases: MariaDB is commonly used for the same purposes as MySQL, including in web and cloud applications involving both transaction processing and analytics workloads.

Licensing: The free MariaDB Server software — referred to by the company as MariaDB Community Server — is released under the GPLv2 license.

Source code repository: https://github.com/MariaDB/server

Commercial support options: MariaDB PLC sells a MariaDB Enterprise Server version of the database that also supports JSON data and columnar storage. SkySQL, a company that was spun out of MariaDB PLC in late 2023, offers a fully managed DBaaS implementation. MariaDB is also available as part of Amazon RDS and Azure Database, although Microsoft plans to retire its offering in September 2025.

3. PostgreSQL

PostgreSQL got its start as Postgres in 1986 at the University of California, Berkeley. The Postgres project was initiated by relational database pioneer Michael Stonebraker, then a professor at the school, as a more advanced alternative to Ingres, a proprietary RDBMS that he also played a lead role in developing. The software became open source in 1995, when a SQL language interpreter was also added, and it was officially renamed PostgreSQL in 1996. Decades later, though, PostgreSQL and Postgres are still used interchangeably by developers, vendors and users to refer to the database.

PostgreSQL offers full RDBMS features, including ACID compliance, SQL querying and support for procedural language queries to create stored procedures and triggers in databases. Like MySQL, MariaDB and many other database technologies, it also supports multiversion concurrency control (MVCC) so data can be read and updated by different users at the same time. In addition, PostgreSQL supports other types of database objects than standard relational tables, and it’s described as an object-relational DBMS on the open source project’s website.

Common use cases: PostgreSQL is commonly positioned as an open source alternative to the proprietary Oracle Database. It’s widely used to support enterprise applications that require complex transactions and high levels of concurrency, and sometimes in data warehousing operations.

Licensing: The software is available under the OSI-approved PostgreSQL License.

Source code repository: https://git.postgresql.org/gitweb/?p=postgresql.git;a=summary

Commercial support options: PostgreSQL has a wide range of commercial support and cloud offerings. EDB, formerly known as EnterpriseDB, specializes in PostgreSQL and provides both self-managed and DBaaS versions in the cloud. Managed PostgreSQL cloud services are also available from AWS, Google, Microsoft and Oracle, as well as vendors such as Aiven, Percona and NetApp’s Instaclustr subsidiary.

4. Firebird

The Firebird open source relational database’s technology roots go back to the early 1980s, when the proprietary InterBase database was created. After InterBase was acquired by multiple vendors, commercial product development ended and the final release was made available under an open source license in 2000. Within a week, the Firebird project was created to continue developing a fork of the technology.

Firebird supports ACID-compliant transactions, external user-defined functions and various standard SQL features, and it includes a multi-generational architecture that provides MVCC capabilities. The software has a relatively small footprint and is available in an embedded single-user version, but it can also be used to run multi-terabyte databases with hundreds of concurrent users. It shouldn’t be confused with Firestore and Firebase Realtime Database, two commercial NoSQL databases developed by Google.

Common use cases: Firebird can handle both operational and analytics applications. It’s used in various types of enterprise applications, including ERP and CRM systems.

Licensing: Firebird is made available under the InterBase Public License (IPL) and the Initial Developer’s Public License (IDPL). Both are variants of the Mozilla Public License Version 1.1, which is OSI-approved though now superseded by Version 2.0. The IPL covers the source code from InterBase, while the IDPL applies to added or improved code developed as part of the Firebird project.

Source code repository: https://github.com/FirebirdSQL/firebird

Commercial support options: Firebird is an independent open source project not driven by a particular vendor, and the software is free to use, including for commercial purposes. The Firebird website lists six companies that provide commercial support, consulting and training services. Firebird cloud services running on Windows Server 2019 are available for purchase in the AWS, Azure and Google clouds, although support will end on the Google Cloud one in August 2024.

5. SQLite

SQLite is a lightweight embedded RDBMS that runs inside applications. It was created in 2000 by computer analyst and programmer D. Richard Hipp while he was working as a government contractor in support of a U.S. Navy project, which needed a database that could run without a database administrator (DBA) in environments with minimal resources. Hipp continues to lead development of the software as project architect through Hipp, Wyrick & Company Inc., a software engineering firm commonly known as Hwaci for short.

As an embedded database, SQLite is self-contained, meaning it’s fully functional within the application it powers. The software is a library that embeds a full-featured SQL database engine supporting ACID transactions. There are no separate database server processes. Data reads and writes are done directly to ordinary disk files, and a complete SQLite database that includes tables, indices, triggers and views can be contained in a single file.
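The serverless, single-file design described above is visible from Python, whose standard library bundles the SQLite engine in the sqlite3 module. This minimal sketch (hypothetical table and data, chosen for illustration) needs no server setup at all:

```python
import sqlite3

# Connect to an in-memory database; passing a file path instead would
# persist the entire database -- tables, indices, triggers -- in one file.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sensors (id INTEGER PRIMARY KEY, name TEXT, reading REAL)")

# Writes go straight to the database file (or memory) -- no server process.
with conn:  # the context manager wraps the statements in a transaction
    conn.executemany(
        "INSERT INTO sensors (name, reading) VALUES (?, ?)",
        [("temp", 21.5), ("humidity", 40.2)],
    )

rows = conn.execute("SELECT name, reading FROM sensors ORDER BY id").fetchall()
print(rows)  # [('temp', 21.5), ('humidity', 40.2)]
conn.close()
```

Because the library runs inside the application process, there is nothing to install or administer, which is exactly why SQLite suits the DBA-less environments described above.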

Common use cases: SQLite is commonly used in mobile applications, web browsers and IoT devices due to its small footprint and ability to operate without a separate server process.

Licensing: The SQLite source code is in the public domain and is free to use, modify and distribute for any purpose without a license. Hwaci does sell a warranty of title with a perpetual right-to-use license to organizations that want one for legal reasons.

Source code repository: https://sqlite.org/src/doc/trunk/README.md

Commercial support options: Hwaci provides paid technical support, maintenance and testing services, and it offers a set of proprietary extensions to SQLite that are sold under separate licenses. As with Firebird, SQLite database services are available on AWS, Azure and Google Cloud, in this case all running on Ubuntu Server 20.04.

6. Apache Cassandra

The Cassandra wide-column store traces its roots back to 2007, when it was originally developed by Facebook to support a new inbox search feature that was being added to the social network. The NoSQL database was open sourced in 2008 and became part of the Apache Software Foundation in 2009, initially as an incubator project before it was elevated to top-level project status the following year.

Cassandra is a fault-tolerant distributed database that can be used to store and manage large amounts of data across a cluster consisting of numerous commodity servers. The software replicates data on multiple server nodes to avoid single points of failure, and it can be scaled dynamically by adding more servers to a cluster based on processing demand. Cassandra currently provides eventual consistency, which can limit its transactional uses due to temporary data inconsistencies, but the Apache project is working to add support for ACID transactions.
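Cassandra distributes and replicates rows by hashing each partition key onto a token ring and walking clockwise to pick replica nodes. The following toy sketch illustrates that placement idea only; real Cassandra uses Murmur3 tokens, virtual nodes and configurable replication strategies, and the node names here are hypothetical:

```python
import hashlib
from bisect import bisect_right

# Toy token ring: each node owns the arc of hash space up to its token.
NODES = ["node-a", "node-b", "node-c", "node-d"]

def token(key: str) -> int:
    # Hash a partition key onto the ring (0 .. 2**32 - 1); illustrative only.
    return int(hashlib.md5(key.encode()).hexdigest(), 16) % 2**32

ring = sorted((token(n), n) for n in NODES)

def replicas(key: str, rf: int = 3) -> list[str]:
    """Walk clockwise from the key's token, collecting rf consecutive nodes."""
    tokens = [t for t, _ in ring]
    start = bisect_right(tokens, token(key)) % len(ring)
    return [ring[(start + i) % len(ring)][1] for i in range(rf)]

print(replicas("customer:42"))  # three of the four nodes, in ring order
```

Storing each row on several consecutive ring positions is what lets the cluster lose a node without losing data, and adding a server only reassigns the arcs adjacent to its new token.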

Common use cases: Cassandra is designed for uses that require fast performance, scalability and high availability. It’s deployed for various applications, including inventory management, e-commerce, social media analytics, messaging systems and telecommunications, among others.

Licensing: The Cassandra software is covered by the Apache License 2.0.

Source code repository: https://github.com/apache/cassandra/tree/trunk

Commercial support options: Multiple vendors provide commercial support for Cassandra and DBaaS versions of the database, including DataStax, Aiven and Instaclustr. Amazon Keyspaces (for Apache Cassandra) and Azure Managed Instance for Apache Cassandra are also available as database services from AWS and Microsoft, respectively.

7. Apache CouchDB

CouchDB is a NoSQL document database that was first released in 2005 by software engineer Damien Katz and became an Apache project in 2008. The Couch part of the name is an acronym for “cluster of unreliable commodity hardware,” which stems from the project’s original goal: to create a reliable database system that could run efficiently on ordinary hardware. CouchDB can be deployed on one server node but also as a single logical system across multiple nodes in a cluster, which can be scaled as needed by adding more servers.

The database uses JSON documents to store data and JavaScript as its query language. Other key features include support for MVCC and the ACID properties in individual documents, although an eventual consistency model is used for data stored on multiple database servers — a tradeoff that prioritizes availability and performance over absolute data consistency. Data is synchronized across servers through an incremental replication feature that can be set up for bidirectional tasks and used to support mobile apps and other offline-first applications.

Common use cases: CouchDB is used for various purposes, including data analytics, time series data storage and mobile applications that require offline storage and functionality.

Licensing: CouchDB is licensed under the Apache License 2.0.

Source code repository: https://github.com/apache/couchdb

Commercial support options: The IBM Cloudant cloud database is based on CouchDB with added open source technology that supports full-text search and geospatial indexing. Several other companies also offer support for CouchDB, including packaged instances in the AWS, Azure and Google clouds.

8. Neo4j

Neo4j is a NoSQL graph database that’s well suited for representing and querying highly connected data sets. Neo4j uses a property graph database model consisting of nodes, which represent individual data entities, and relationships — also referred to as edges — that define how different nodes are organized and connected. Nodes and relationships can also include properties, or attributes, in the form of key-value pairs that further describe them.
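The property graph model described above can be sketched with plain Python data structures. This is an illustration of the model only, not how Neo4j stores data internally, and the node names and relationship types are hypothetical:

```python
# A toy property graph: nodes and relationships both carry key-value properties.
nodes = {
    1: {"labels": ["Person"], "props": {"name": "Alice"}},
    2: {"labels": ["Person"], "props": {"name": "Bob"}},
    3: {"labels": ["Movie"], "props": {"title": "Arrival"}},
}
relationships = [
    {"start": 1, "end": 2, "type": "KNOWS", "props": {"since": 2019}},
    {"start": 1, "end": 3, "type": "WATCHED", "props": {"rating": 5}},
]

def neighbors(node_id, rel_type):
    """Follow outgoing relationships of one type -- a single-hop traversal,
    roughly what a one-pattern Cypher MATCH expresses."""
    return [nodes[r["end"]] for r in relationships
            if r["start"] == node_id and r["type"] == rel_type]

watched = neighbors(1, "WATCHED")
print([n["props"]["title"] for n in watched])  # ['Arrival']
```

A graph database makes such traversals first-class operations, so multi-hop queries over highly connected data avoid the repeated joins a relational schema would need.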

First released as open source software in 2007, Neo4j is overseen by database vendor Neo4j Inc. It originally was solely a Java-based graph database, but it has been expanded with additional capabilities, including vector search and storage. Key features include full ACID compliance, horizontal scalability through an Autonomous Clustering architecture and the Cypher query language. Neo4j Inc. plans to converge Cypher with GQL, a standard graph query language published by the International Organization for Standardization in April 2024 that uses syntax based on both SQL and Cypher.

Common use cases: Typical uses for Neo4j include social networking, recommendation engines, network and IT operations, fraud detection and supply chain management, with generative AI applications also now supported through the vector search feature.

Licensing: Neo4j Community Edition is licensed under the GPL version 3. An open source version of Cypher named openCypher is also available under the Apache License 2.0.

Source code repository: https://github.com/neo4j/neo4j

Commercial support options: Neo4j Inc. provides several supported commercial offerings, including a Neo4j Enterprise Edition with added closed source components and the subscription-based Neo4j Aura cloud database service.

9. Couchbase Server

Couchbase Server is a NoSQL document database with multimodel capabilities for storing data both in JSON documents and as key-value pairs. The technology resulted from the 2011 merger of two open source database companies: CouchOne, which had been founded by CouchDB creator Damien Katz to offer systems based on that database, and Membase, which was set up to build a key-value store by developers of the memcached distributed caching technology. The combined company became Couchbase, leading to the development of Couchbase Server.

Despite their similar names and partly shared origins, Couchbase Server and CouchDB aren’t directly related or compatible — they’re different database technologies with their own code and APIs. Couchbase Server supports strong consistency, distributed ACID transactions and SQL++, a SQL-like language for JSON data querying. It also includes both vector and full-text search capabilities plus a multidimensional scaling feature that enables different database functions to be isolated and separately scaled up based on workload demands.

Common use cases: Couchbase Server is often used to support distributed application workloads and for mobile, edge and IoT applications.

Licensing: Originally available under the Apache License 2.0, Couchbase Server was switched in 2021 to the Business Source License (BSL) 1.1, a source available license that restricts commercial use of the software by other vendors. Database releases are converted back to the Apache open source license four years after they become available.

Source code repository: https://github.com/couchbase/manifest

Commercial support options: Couchbase offers an enterprise edition of Couchbase Server for cloud and on-premises deployments, as well as a mobile version of the database and a fully managed DBaaS technology named Couchbase Capella.

10. MongoDB

MongoDB is another NoSQL document database that was initially developed as open source software and is now a source available technology. First released in 2009, MongoDB stores data in a JSON-like document format named BSON, which is short for Binary JSON. As the full name indicates, BSON encodes data in a binary structure that’s designed to support more data types and faster indexing and querying performance than JSON provides.

The database is often seen as an attractive option for developers that want to build applications without the constraints of a fixed schema. In addition to its document data model, MongoDB includes native support for graph, geospatial and time series data. MongoDB Atlas, a cloud database service offered by lead developer MongoDB Inc., also provides vector and full-text search features that can be used free of charge for development and testing in local environments. Other key features in MongoDB include multi-document ACID transactions, sharding for horizontal scalability and automatic load balancing.
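The appeal of a flexible schema is easy to see with plain Python dictionaries standing in for JSON documents. This is an illustration of the document model only; a real deployment would use a MongoDB driver such as PyMongo, and the collection and fields below are hypothetical:

```python
# Documents in the same collection need not share a schema.
products = [
    {"_id": 1, "name": "laptop", "specs": {"ram_gb": 16, "cpu": "arm64"}},
    {"_id": 2, "name": "t-shirt", "size": "M", "tags": ["cotton", "sale"]},
]

def find(collection, predicate):
    """A minimal stand-in for a find() query over JSON-like documents."""
    return [doc for doc in collection if predicate(doc)]

# Query a nested field that only some documents have -- no ALTER TABLE needed.
hits = find(products, lambda d: d.get("specs", {}).get("ram_gb", 0) >= 16)
print([d["name"] for d in hits])  # ['laptop']
```

Because each document carries its own structure, new fields can appear in new documents without migrating the ones already stored.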

Common use cases: MongoDB is widely deployed for uses that include AI, edge computing, IoT, mobile, payment and gaming applications, as well as website personalization, content management and product catalogs.

Licensing: Since 2018, new versions of MongoDB Community Server and patches for previous releases have been made available under the Server Side Public License (SSPL) Version 1, a source available license created by MongoDB Inc.

Source code repository: https://github.com/mongodb/mongo

Commercial support options: In addition to MongoDB Atlas, MongoDB Inc. offers a self-managed MongoDB Enterprise Server that also provides additional capabilities beyond what’s in the community edition. MongoDB support and managed services are also available from vendors such as Datavail and Percona. Amazon DocumentDB (with MongoDB compatibility) is a fully managed DBaaS offering from AWS that supports versions 4.0 and 5.0 of MongoDB but not the newer 6.0 and 7.0 releases.

11. Redis

Redis is a NoSQL in-memory database that was converted to a source available technology in March 2024. The Redis project was created in 2009 by software programmer Salvatore Sanfilippo, known by the nickname antirez, to help solve a database scaling problem with a real-time website log analysis tool. Short for Remote Dictionary Server, Redis originally was positioned as software that provided a key-value data store as a caching technology to accelerate existing databases and application workloads.

The database caching functionality remains the foundation of Redis, with features that include built-in replication, on-disk data persistence and support for complex data types. But the platform has been expanded to include additional capabilities, such as support for storing JSON documents and both vector and time series data. A graph database module was also added, but lead developer Redis Inc. stopped developing it in 2023.

Common use cases: While Redis can be used as a full database, one of its most common uses is still as a database query caching layer. It’s also often used to support real-time notifications through an integrated pub/sub capability and as a session store to help manage user sessions for web and mobile applications.
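The caching role described above usually follows the cache-aside pattern: check a fast key-value store first and fall back to the database on a miss. This toy in-process sketch illustrates the pattern, not the Redis API itself; the key format and loader function are hypothetical:

```python
import time

# A minimal key-value store with per-key expiration (stand-in for Redis).
_cache: dict[str, tuple[float, object]] = {}

def cache_get(key):
    entry = _cache.get(key)
    if entry and entry[0] > time.monotonic():
        return entry[1]          # hit and not yet expired
    _cache.pop(key, None)        # expired or missing
    return None

def cache_set(key, value, ttl_seconds=60.0):
    _cache[key] = (time.monotonic() + ttl_seconds, value)

def get_user(user_id, load_from_db):
    """Check the cache first; on a miss, query the database and populate it."""
    key = f"user:{user_id}"
    value = cache_get(key)
    if value is None:
        value = load_from_db(user_id)
        cache_set(key, value)
    return value

first = get_user(7, lambda uid: {"id": uid, "name": "Ada"})   # miss -> loads
second = get_user(7, lambda uid: {"id": uid, "name": "???"})  # hit -> cached
print(second["name"])  # 'Ada'
```

The TTL keeps stale entries from lingering indefinitely, which is the same tradeoff an expiring Redis key makes when it fronts a slower backing database.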

Licensing: As of March 2024, the core Redis software is dual-licensed under the Redis Source Available License 2.0 and the SSPL v1. The added database modules and a Redis Stack bundle that combines them have been covered by those licenses since 2022.

Source code repository: https://github.com/redis/redis

Commercial support options: Redis Inc. provides a closed source Redis Enterprise offering under a commercial license, as well as a fully managed Redis Cloud service. Microsoft’s Azure Cache for Redis is a managed service that includes both the core Redis software and Redis Enterprise as options. Redis managed services are available from Aiven and Instaclustr, too. In addition, AWS, Google and Oracle offer cloud services with Redis compatibility.

12. CockroachDB

CockroachDB is a source available distributed SQL database loosely inspired by Google’s proprietary Spanner database. Developed primarily by vendor Cockroach Labs, CockroachDB was first released in 2015, with an initial production version appearing two years later. Like the insect it’s named after, CockroachDB is designed to be hard to kill. The cloud-native database is built to be a fault-tolerant, resilient and consistent data management platform.

CockroachDB scales horizontally and can survive various types of equipment failures with minimal disruptions to users and no manual intervention required by DBAs, according to its developers. Key features include automated repair and recovery, support for ACID transactions with strong consistency, a SQL API and geo-partitioning of data to boost application performance. It also has a “multi-active availability” model that enables users to read and write data from any cluster node with no conflicts.

Common use cases: CockroachDB is well suited for high-volume OLTP applications and distributed database deployments across multiple data centers and geographic regions.

Licensing: Since 2019, most of CockroachDB’s core features have been licensed under a version of the BSL that requires other vendors to buy a license from Cockroach Labs if they want to offer a commercial database service. Other core features are covered by the Cockroach Community License (CCL), which allows source code to be viewed and modified but not reused without an agreement with Cockroach Labs. The features licensed under the BSL convert to the Apache License 2.0 and become open source three years after a new database release, a change that doesn’t apply to the CCL ones.

Source code repository: https://github.com/cockroachdb/cockroach

Commercial support options: Cockroach Labs provides technical support and additional paid enterprise features that are available in both self-managed and DBaaS deployments.

Sean Michael Kerner is an IT consultant, technology enthusiast and tinkerer. He has pulled Token Ring, configured NetWare and been known to compile his own Linux kernel. He consults with industry and media organizations on technology issues.

Subscribe for MMS Newsletter

By signing up, you will receive updates about our latest information.



Terraform 1.8 Adds Provider-Defined Functions, Improves AWS, GCP, and Kubernetes Providers

MMS Founder
MMS Matt Campbell

Article originally posted on InfoQ. Visit InfoQ

HashiCorp has released version 1.8 of Terraform, its infrastructure-as-code tool. The release introduces provider-defined functions, which enable the creation of custom functions within a given provider that handle computational-style tasks. Several providers, including AWS, GCP, and Kubernetes, have introduced new provider-defined functions alongside this release. Version 1.8 also introduces improvements to refactoring across resource types.

Provider-defined functions can be used in any Terraform expression with the following calling syntax: provider::provider_name::function_name(). These functions can perform several tasks, including data transformation, parsing data, assembling data, and simplifying validations and assertions.

Coinciding with the release of Terraform 1.8, several Terraform providers have been updated to include provider-defined functions. The 5.40 release of the Terraform AWS provider now has provider-defined functions to parse and build ARNs (Amazon Resource Names). For example, arn_parse can be used to retrieve the account ID for a given resource:

# create an ECR repository
resource "aws_ecr_repository" "hashicups" {
  name = "hashicups"
  
  image_scanning_configuration {
    scan_on_push = true
  }
}
 
# output the account ID of the ECR repository
output "hashicups_ecr_repository_account_id" {
  value = provider::aws::arn_parse(aws_ecr_repository.hashicups.arn).account_id
}

Included in the 5.23 release of the Terraform Google Cloud provider is a function to parse regions, zones, names, and projects from resource IDs that are not managed within the Terraform configuration.

resource "google_cloud_run_service_iam_member" "example_run_invoker_jane" {
  member   = "user:jane@example.com"
  role     = "run.invoker"
  service  = provider::google::name_from_id(var.example_cloud_run_service_id)
  location = provider::google::location_from_id(var.example_cloud_run_service_id)
  project  = provider::google::project_from_id(var.example_cloud_run_service_id)
}

The 2.28 release of the Terraform Kubernetes provider includes a provider-defined function for encoding and decoding Kubernetes manifests into Terraform.

resource "kubernetes_manifest" "example" {
  manifest = provider::kubernetes::manifest_decode(file("${path.module}/manifest.yaml"))
}

Version 2.30.0 of the HashiCorp Terraform extension for Visual Studio Code includes syntax highlighting and auto-completion support for provider-defined functions.

OpenTofu, the recent fork of Terraform, has indicated that it will be adding support for provider-defined functions. User janosdebugs posted in the OpenTofu GitHub repo that “provider-implemented functions have been presented to the TSC [Technical Steering Committee] and is being planned for OpenTofu 1.8.” At the time of writing, OpenTofu is on version 1.6.2.

The release also introduces new functionality to move supported resources between resource types in a faster and less error-prone manner. This enhances the moved block behavior to support moving between resources of different types if the target resource type declares it can be converted from the source resource type. Providers can add this support to handle various use cases such as renaming a provider or splitting a resource.

Terraform 1.8 is available now from GitHub or within Terraform Cloud. More details about the release can be found on the HashiCorp blog, the upgrade guide, and in the changelog.

About the Author



MongoDB Launches Program to Help Enterprises Implement GenAI – PYMNTS.com

MMS Founder
MMS RSS

Posted on mongodb google news. Visit mongodb google news

MongoDB has launched a program designed to help enterprises embed generative artificial intelligence (GenAI) into their applications.

The new MongoDB AI Applications Program (MAAP) offers strategic advisory, professional services and an integrated technology stack from MongoDB and its partners, the developer data platform said in a Wednesday (May 1) press release.

“This program combines our robust developer data platform, MongoDB Atlas, with our own expertise, professional services and strategic partnerships with leaders in generative AI technologies to provide comprehensive roadmaps for organizations of all sizes to confidently adopt and implement generative AI,” Alan Chhabra, executive vice president of worldwide partners at MongoDB, said in the release.

Among the partners that have joined MAAP and will provide strategic consulting, technology and expertise are Anthropic, Amazon Web Services (AWS), Google Cloud and Microsoft, according to the release.

These and other partners also provide foundation models, cloud infrastructure and GenAI framework and model hosting, the release said.

Combining these partners’ resources with MongoDB’s unified developer data platform, MAAP will help customers develop strategies and roadmaps to deploy GenAI applications, build GenAI applications that are secure and reliable, and engage in GenAI “jump-start” sessions with industry experts, per the release.

MAAP is designed to help organizations integrate GenAI to “solve the right business problems,” Chhabra said in the release.

“With the MongoDB AI Applications Program, we and our partners help customers use generative AI to enhance productivity, revolutionize customer interactions and drive industry advancements,” Chhabra said.

PYMNTS Intelligence has found that GenAI has taken off exponentially, with new companies popping up nearly every day and promising labor- and cost-saving applications in every field. These applications touch every corner of human life, according to “Preparing for a Generative AI World,” a PYMNTS Intelligence and AI-ID collaboration.

In another recent move, MongoDB partnered with automated evaluation and security platform Patronus AI in January to bring automated large language model (LLM) evaluation and testing capabilities to enterprise companies.

This partnership offers a solution that enables enterprises to develop reliable document-based LLM workflows, building the systems with the support of MongoDB Atlas and using Patronus AI to evaluate, test and monitor them.
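The evaluation step in such a workflow can be thought of as scoring how well an LLM's answer is grounded in the retrieved documents. The following is an illustrative toy sketch only; Patronus AI's actual evaluators are far more sophisticated, and every name here is a hypothetical stand-in, not the Patronus API:

```python
# Toy "faithfulness" check for a document-based LLM workflow.
# All function names are hypothetical illustrations, not a real API.

def grounding_score(answer: str, source: str) -> float:
    """Fraction of answer words that also appear in the source document."""
    answer_words = [w.lower().strip(".,") for w in answer.split()]
    source_words = {w.lower().strip(".,") for w in source.split()}
    if not answer_words:
        return 0.0
    hits = sum(1 for w in answer_words if w in source_words)
    return hits / len(answer_words)

def evaluate(answer: str, source: str, threshold: float = 0.8) -> bool:
    """Flag answers that are poorly grounded in the retrieved document."""
    return grounding_score(answer, source) >= threshold

source = "MongoDB Atlas is a fully managed multicloud developer data platform."
print(evaluate("Atlas is a fully managed multicloud platform.", source))  # grounded
print(evaluate("Atlas was released in 1987 by NASA.", source))            # not grounded
```

A production evaluator would replace word overlap with model-based checks for hallucination, toxicity and relevance, but the control flow, scoring each answer against its source and gating on a threshold, is the same shape.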


Article originally posted on mongodb google news. Visit mongodb google news



MongoDB launches ‘one-stop shop’ program for enterprises to build generative AI solutions


Cloud database provider MongoDB Inc. today announced the launch of a program that will help organizations rapidly build and deploy modern applications using generative artificial intelligence technology at enterprise scale, combining strategic advisory, professional services and an end-to-end technology stack for hosting.

The company said the new project, called the MongoDB AI Applications Program, is designed to be a “one-stop shop” solution for enterprises looking to quickly and effectively embed generative AI into applications. Given how quickly the technology has taken the world by storm and how complex it is to build and deploy, it’s becoming necessary to find a way to ease the transition.

“We’ve seen tremendous enthusiasm for generative AI among our customers, from agile startups to established global enterprises,” said Alan Chhabra, executive vice president of worldwide partners at MongoDB. “These organizations leverage MongoDB’s cutting-edge technology and comprehensive services to transform innovative concepts into real-world applications. However, some are still navigating how best to integrate generative AI to solve the right business problems.”

Gartner Inc. predicts that by 2026, more than 80% of enterprise businesses will deploy generative AI apps in production environments, up from just 5% in 2023. Companies such as Microsoft Corp., OpenAI, Google LLC and Amazon Web Services Inc. are bringing enterprise-scale generative AI solutions onto the market, opening up the industry. A recent report from Fortune Business Insights estimated the global generative AI market size at $43 billion in 2023 and predicts that it will rise to $968 billion by 2032.

MongoDB said that with the rise of generative AI, organizations must modernize and build a strategy to meet customer expectations in order to compete. However, the technology behind generative AI can be out of reach in terms of complexity and cost.

MAAP addresses this by bringing together a broad set of partners, including industry-leading consultancies, to offer expert professional services and assistance in building technology roadmaps, helping customers turn their business acumen into a generative AI strategy. MAAP also draws on partnerships with industry-leading cloud infrastructure and AI model-hosting providers, including Anthropic PBC, Anyscale Inc., AWS, Cohere Inc., Google Cloud and Microsoft Azure, to help businesses rapidly scale and get into production.

Enterprise developers also gain access to MongoDB Atlas, the company’s fully managed multicloud developer data platform, alongside MongoDB’s own expertise and professional services. It can be used to build and deploy generative AI applications that need quick access to proprietary company data, making them easier to customize for semantic search, text-to-image search and recommendation systems.
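The semantic-search use case mentioned above typically runs through Atlas Vector Search's `$vectorSearch` aggregation stage. A minimal sketch, assuming a collection whose documents carry an `embedding` field covered by a vector search index named `default` (both names are assumptions for illustration):

```python
# Sketch of an Atlas Vector Search query for semantic search.
# "default" (index name) and "embedding" (vector field) are assumed names.

def semantic_search_pipeline(query_vector, limit=5, num_candidates=100):
    """Build a $vectorSearch aggregation pipeline for MongoDB Atlas."""
    return [
        {
            "$vectorSearch": {
                "index": "default",           # name of the Atlas vector index
                "path": "embedding",          # field holding document vectors
                "queryVector": query_vector,  # embedding of the user's query
                "numCandidates": num_candidates,
                "limit": limit,
            }
        },
        # Keep only the fields the application needs, plus the relevance score.
        {"$project": {"title": 1, "score": {"$meta": "vectorSearchScore"}}},
    ]

pipeline = semantic_search_pipeline([0.1, 0.2, 0.3])
# With pymongo, this would run as: results = collection.aggregate(pipeline)
```

In practice the query vector would come from an embedding model (one of the foundation-model partners mentioned above), and `numCandidates` trades recall against latency in the approximate nearest-neighbor search.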

“With the MongoDB AI Applications Program, we and our partners help customers use generative AI to enhance productivity, revolutionize customer interactions, and drive industry advancements,” said Chhabra.


