Axiros Launches New Release of their QoE Monitoring Solution – AXTRACT 3.5.0

MMS Founder
MMS RSS

Posted on mongodb google news. Visit mongodb google news

Monitoring and Management Of Customer QoE For Data And VoIP Services

MUNICH – March 27, 2024 – PRLog – Axiros, a leader in TR-069/TR-369, Unified Device Management, and zero-touch customer experience solutions, has announced the launch of version 3.5.0 of its Quality of Experience (QoE) monitoring solution, AXTRACT.

Axiros AXTRACT is a Quality of Experience (QoE) monitoring solution. QoE monitoring is the process of assessing the overall quality of a customer’s experience with a brand or service: it involves collecting and analyzing data on factors that can affect customer satisfaction, such as wait times, response times, unsuccessful attempts, error rates, and more.

QoE monitoring can help telcos and ISPs identify areas where customers are having a poor experience and make the changes needed to improve their service. In today’s competitive market, providing an excellent customer experience is essential for success, and QoE monitoring can play a vital role in helping telcos and ISPs meet this goal.

Important highlights from this release are:

  • Core modules for large-scale USP deployments
  • AXTRACT and Ansible chroot using Debian 11 (bullseye)
  • Ships and supports MongoDB-5.0 (MongoDB-4.2 is still supported)
  • Ships and supports ClickHouse-2024-08-lts
  • Ships and supports Kafka-3.5.1
  • Ships and supports Grafana-10

“This release has a high focus on bringing the latest supported software to our customers to stay ahead in the security game. On top of that, the added USP functionality brings users of AXTRACT into a position to monitor their millions of USP devices in a scalable manner,” said Stephan Bentheimer, the Lead of AXTRACT Development at Axiros.

For any technical questions about AXTRACT, please contact sales@axiros.com.

About Axiros

Any Device. Any Protocol. Any Service. Any Time | We manage all THINGS.

For over 20 years, Axiros has been a leading provider of software solutions and platforms (https://www.axiros.com/solutions) for Device Management in telecommunications and other industries. Axiros specializes in Device Management software that enables seamless integration and management of devices and services, leveraging industry standards like TR-069 and USP. With a global presence (https://www.axiros.com/office-locations), Axiros is committed to delivering innovative solutions that meet the evolving needs of businesses in today’s digital landscape. To learn more about Axiros, visit www.axiros.com.


Axiros GmbH Logo

Source: Axiros GmbH


Press release distribution by PRLog




Couchbase, Inc. (NASDAQ:BASE) SVP Huw Owen Sells 11,581 Shares – MarketBeat

MMS Founder
MMS RSS

Posted on nosqlgooglealerts. Visit nosqlgooglealerts

Couchbase, Inc. (NASDAQ:BASE) SVP Huw Owen sold 11,581 shares of the business’s stock in a transaction that occurred on Friday, March 22nd. The shares were sold at an average price of $26.79, for a total value of $310,254.99. Following the transaction, the senior vice president now directly owns 441,454 shares in the company, valued at approximately $11,826,552.66. The transaction was disclosed in a legal filing with the SEC, which is available on the SEC’s website.

Huw Owen also recently made the following trade(s):

  • On Thursday, January 25th, Huw Owen sold 1,376 shares of Couchbase stock. The shares were sold at an average price of $25.00, for a total value of $34,400.00.
  • On Tuesday, January 23rd, Huw Owen sold 3,500 shares of Couchbase stock. The shares were sold at an average price of $25.00, for a total value of $87,500.00.
  • On Tuesday, January 2nd, Huw Owen sold 40,604 shares of Couchbase stock. The shares were sold at an average price of $21.38, for a total value of $868,113.52.

Couchbase Trading Down 2.6%

Shares of BASE traded down $0.69 during trading hours on Tuesday, hitting $25.98. The company had a trading volume of 273,569 shares, compared to its average volume of 511,737. The company has a market capitalization of $1.25 billion, a PE ratio of -15.28 and a beta of 0.72. Couchbase, Inc. has a one year low of $13.28 and a one year high of $32.00. The company’s fifty day moving average price is $26.80 and its two-hundred day moving average price is $21.65.

Institutional Inflows and Outflows

Several hedge funds have recently added to or reduced their stakes in BASE. Swiss National Bank increased its position in Couchbase by 7.9% during the 1st quarter. Swiss National Bank now owns 25,800 shares of the company’s stock valued at $449,000 after buying an additional 1,900 shares in the last quarter. JPMorgan Chase & Co. increased its position in Couchbase by 698.1% during the 1st quarter. JPMorgan Chase & Co. now owns 46,673 shares of the company’s stock valued at $813,000 after buying an additional 40,825 shares in the last quarter. Bank of New York Mellon Corp increased its position in Couchbase by 130.7% during the 1st quarter. Bank of New York Mellon Corp now owns 64,384 shares of the company’s stock valued at $1,122,000 after buying an additional 36,474 shares in the last quarter. MetLife Investment Management LLC acquired a new position in Couchbase during the 1st quarter valued at about $255,000. Finally, Metropolitan Life Insurance Co NY acquired a new position in Couchbase during the 1st quarter valued at about $30,000. 96.07% of the stock is owned by institutional investors and hedge funds.

Wall Street Analysts Forecast Growth

A number of equities analysts recently issued reports on BASE shares. Barclays boosted their price objective on Couchbase from $29.00 to $33.00 and gave the company an “equal weight” rating in a report on Wednesday, March 6th. Guggenheim upped their target price on Couchbase from $27.00 to $32.00 and gave the stock a “buy” rating in a report on Wednesday, March 6th. DA Davidson upped their target price on Couchbase from $27.00 to $35.00 and gave the stock a “buy” rating in a report on Wednesday, March 6th. Royal Bank of Canada upped their target price on Couchbase from $32.00 to $35.00 and gave the stock an “outperform” rating in a report on Wednesday, March 6th. Finally, Morgan Stanley upped their target price on Couchbase from $20.00 to $21.00 and gave the stock an “equal weight” rating in a report on Thursday, December 7th. Three analysts have rated the stock with a hold rating and eight have assigned a buy rating to the company’s stock. Based on data from MarketBeat.com, the company has an average rating of “Moderate Buy” and an average target price of $32.40.


About Couchbase


Couchbase, Inc provides a database for enterprise applications in the United States and internationally. Its database works in multiple configurations, ranging from cloud to multi- or hybrid-cloud to on-premise environments to the edge. The company offers Couchbase Capella, an automated and secure Database-as-a-Service that helps in database management by deploying, managing, and operating Couchbase Server across cloud environments; and Couchbase Server, a multi-service NoSQL database, which provides a SQL-compatible query language, SQL++, that supports a wide array of data manipulation functions.

See Also

Insider Buying and Selling by Quarter for Couchbase (NASDAQ:BASE)




Vitess Version 19 Released: Ends Support for MySQL 5.7, Improves MySQL Compatibility

MMS Founder
MMS Aditya Kulkarni

Article originally posted on InfoQ. Visit InfoQ

Recently, Vitess launched its latest stable release v19. The highlights of this update include metrics for monitoring stream consolidations, improved query compatibility with MySQL for multi-table delete operations, support for incremental backups, and various performance enhancements, among other features.

The Vitess Maintainer Team discussed the release in a blog post which was also shared on the CNCF website. Vitess provides a database management solution designed for the deployment, scaling, and administration of large clusters of open-source database instances. At present, it provides support for MySQL and Percona Server for MySQL.

In response to Oracle marking MySQL 5.7 as end-of-life in October 2023, Vitess is aligning with these updates by discontinuing support for MySQL 5.7 in this latest release. The maintainers’ team has recommended users upgrade their systems to MySQL 8.0 while utilizing Vitess 18 before transitioning to Vitess 19. It is important to note that Vitess 19 will maintain compatibility for importing from MySQL 5.7.

The improved query compatibility with MySQL is facilitated through the SHOW VSCHEMA KEYSPACES query, along with various other SQL syntax enhancements, including the ability to perform the AVG() aggregation function on sharded keyspaces through a combination of SUM and COUNT. Furthermore, Vitess 19 broadens its support for Non-recursive Common Table Expressions (CTEs), enabling the creation of more complex queries.

To mitigate potential security vulnerabilities, communication between throttlers has been transitioned to use gRPC, discontinuing support for HTTP communication. Vitess has also introduced VSchema improvements by incorporating a --strict sub-flag and a matching gRPC field within the ApplyVSchema command. This update guarantees using only recognized parameters in Vindexes, thus improving error detection and configuration verification.

Additionally, in a move to enhance security and ensure greater system stability, the ExecuteFetchAsDBA command now rejects multi-statement inputs. Vitess plans to offer formal support for multi-statement operations in an upcoming version.

The process of Vitess migration cut-over has been updated to incorporate a back-off strategy when encountering table locks. If the initial cut-over fails, subsequent attempts will occur at progressively longer intervals, minimizing strain on an already burdened production system.

Additionally, Online DDL now offers the option of a forced cut-over, which can be triggered either after a specified timeout or on demand. This approach gives precedence to completing the cut-over, by ending queries and transactions that interfere with the cut-over process.

When it comes to general features, Vitess is equipped with JDBC and Go database drivers that comply with a native query protocol. Furthermore, it supports the MySQL server protocol, ensuring compatibility with almost all other programming languages.

Numerous companies, including Slack and GitHub, have implemented Vitess to meet their production requirements. Additionally, Vitess has managed all database traffic for YouTube for more than five years. The Vitess Maintainer Team has invited the tech community to join discussions on GitHub or in the Slack channel, where they can share stories, pose questions, and engage with the broader Vitess community.

About the Author



Copilot in Azure SQL Database in Private Preview

MMS Founder
MMS Steef-Jan Wiggers

Article originally posted on InfoQ. Visit InfoQ

Microsoft has announced a private preview of Copilot in Azure SQL Database, which offers natural language to SQL conversion and self-help for database administration.

Azure SQL is Microsoft’s cloud-based database service that offers a broad range of SQL database features, scalability, and security to support applications’ data storage and management needs. Copilot in Azure SQL Database introduces two features in the Azure portal:

  • Natural Language to SQL: This feature converts natural language queries into SQL within the Azure portal’s query editor for Azure SQL Database, simplifying database interactions. The query editor uses table and view names, column names, and primary and foreign key metadata to generate T-SQL code, which the user can then review and execute.

    Generate Query capability in the Azure portal query editor (Source: Microsoft Learn)

  • Azure Copilot Integration: By incorporating Azure SQL Database capabilities into Microsoft Copilot for Azure, this feature offers users self-service guidance and enables them to manage their databases and address issues independently. Users can ask and receive helpful, context-rich Azure SQL Database suggestions from Microsoft Copilot for Azure.

    Sample prompt for database performance (Source: Microsoft Learn)

Joe Sack, from Azure SQL product management at Microsoft, writes:

Copilot in Azure SQL Database integrates data and formulates applicable responses using public documentation, database schema, dynamic management views, catalog views, and Azure supportability diagnostics.

In addition, regarding the Query Editor, Edward Dortland, a managing director at Twintos, commented on a LinkedIn post on Copilot for SQL Azure:

Finally, I don’t have to re-read Itzik Ben-Gan’s book whenever somebody asks me to make an update on that one stored procedure that somebody before me, made 10 years ago and is all about compounding interest calculations, on combined derivatives in a banking database. Welcome Copilot for Azure query editor, I waited so long for this moment.

Azure SQL is not the only service that has Microsoft Azure Copilot integration. This Copilot supports various services and functions across Azure, enhancing productivity, reducing costs, and providing deep insights by leveraging large language models (LLMs), the Azure control plane, and insights about the user’s Azure environment. Moreover, it is not limited to Azure alone as it extends its capabilities across the Microsoft ecosystem, such as Dynamics 365, Service Management, Power Apps, and Power Automate.

Vishal Anand, a global chief technologist at IBM, concluded in a Medium blog post on Microsoft Azure Copilot:

Era has arrived where Human worker and Digital worker can collaborate in real-time to add value to cloud operations. It is currently not recommended for production workloads (obviously it is in preview phase).

The private preview of Copilot in Azure SQL Database is available via a sign-up page, and more information is available on the FAQ page.

About the Author



Presentation: Streamlining Cloud Development with Deno

MMS Founder
MMS Ryan Dahl

Article originally posted on InfoQ. Visit InfoQ

Transcript

Dahl: I’m going to just give you a whirlwind tour of Deno and some subset of the features that were built into it. These days I’m working on Deno. The web is incredibly important. Year after year, it just becomes more important. This wasn’t super clear in 2020. There was a time where maybe iPhone apps were going to replace the web. It wasn’t super obvious. I think now in 2023, it’s pretty clear that the web is here to stay, and, indeed, is the medium of human information. This is how you read your newspaper. This is how you interact with your bank. This is how you talk to your friends. This is how humans communicate with each other. It is incredibly important. We build so much infrastructure on the web at this point, and the web being a very large set of different technologies. I think it is a very fair bet to say that the web is still going to be here 5 years from now, if not 10 or 20 years from now. That is maybe unsurprising to you all, but technology is very difficult to predict. There are very few situations where you can say 5 years from now, this technology is going to be here. I think this is a rare exception. We are pretty sure that JavaScript, a central protocol of the web central programming language, it will be here in the future because it’s inherently tied to the web. I think that further work in JavaScript is necessary. I think the tools that I and others developed 10-plus years ago with Node.js, and npm, and the whole ecosystem around that, are dating fairly poorly over the years. I think further investment here is necessary.

The Node Project

With Node, my goal was to really force developers to have an easy way to build fast servers. The core idea in Node, which was relatively new at the time was, to force people to use async I/O. You had to do non-blocking sockets, as opposed to threading. With JavaScript, this was all bundled up and packaged in a nice way that was easily understandable to frontend developers because they were used to non-blocking JavaScript in websites. They were used to handling buttons with on-click callbacks. Very similar to how servers are handled in Node, when you get a new connection, you get an on-connection callback. I think it’s clear these days that more can be done beyond just doing async I/O. I think this is pretty table stakes. Yet, developing servers is still hard. It’s still something that we’re stumbling across. We have a larger perspective these days on what it means to build an optimal server. It’s not just something that runs on your laptop, it’s something that needs to run across the entire world. Being able to configure that and manage that is a pretty complex situation. I’m sure you all have dealt with complex cloud configurations in some form or another. I’m sure many of you have dealt with Terraform. I am sure you have thought about how to get good latency worldwide. Having a single instance of your application server in U.S.-East is not appropriate for all applications. Sometimes you need to serve users locally, say in Tokyo, and you are bounded by speed of light problems. You have to be executing code close to where users reside.

There is just an insane amount of software and workflows and tool chains, especially in the JavaScript space where with Node, we created this small core and let a broad ecosystem develop around it. I think not an unreasonable theory, and certainly was pretty successful, but because of this wide range of tooling options in front of you when you sit down to program some server-side JavaScript, it takes a lot to understand what’s going on. Should you be using ESLint, or TSLint, or Prettier? There are all these unknown questions that make it so that you have to be an expert in the ecosystem in order to proceed correctly. Those are things that can be solved at the platform layer. Of course, I think we’ve all heard about the npm supply chain security problems where people can just publish any sort of code to the registry. There are all sorts of nasty effects where as soon as you link a dependency, you are potentially executing untrusted post install scripts on your local device, without you ever agreeing to that, without really understanding that. That is a very bad situation. We cannot be executing untrusted code from the internet. This is a very bad problem.

Deno: Next-Gen JavaScript Runtime

In some sense, Deno is a continuation of the Node project. It is pursuing the same goals, but just with an expanded perspective on what this means on what optimal servers actually entail. Deno is a next-generation JavaScript runtime. It is secure by default. It is, I say JavaScript runtime, because at the bottom layer, it is JavaScript. It treats TypeScript and JSX natively, so it really does understand TypeScript in particular, and understands the types, and can do type checking, and just handles that natively so that you are not left with trying to configure that on top of the platform. It has all sorts of tooling built into it, testing, linting, formatting, just to name a small subset. It is backwards compatible with Node and npm. It’s trying to thread this needle between keeping true to web standards, and really keeping the gap between browsers and server-side JavaScript as minimal as possible. Also being compatible with existing npm modules, existing Node software. That’s a difficult thing to do because the gap between the npm ecosystem, the Node ecosystem, and where web browser standards are specified is quite large.

Demo (Deno)

In Deno, you can run URLs directly from the command line. I’ll just type it up, https://deno.land/std@0.150.0/examples/gist.ts. Actually, let me curl this URL first, just to give you an idea. This is some source code. This is a little program that posts GISTs to GitHub. It takes a file as a parameter and uploads it. I’m going to run this file directly in a secure way with Deno, so I’m just going to Deno run that command line. What you’ll see is that, first of all, it doesn’t just execute this thing. You immediately hit this permission prompt situation where it says, Deno is trying to access this GIST token. I have an environment variable that allows me to post to my GitHub. Do you want to allow access or deny access to this? I will say yes. Then it fails because I haven’t actually provided a file name here. What I’m going to do is upload my etc password file, very secure. I will allow it to read the environment variable. Then you see that it’s trying to access the file etc password, I will allow that. Now it’s trying to post to api.github.com. I will allow that. Finally, it has actually uploaded the GIST, and luckily, my etc password file is shadowed here and doesn’t actually contain any secret information so it’s not such a big deal.

Just as an example of running programs securely, obviously, if it’s going to be accessing your operating system, and reading GIST tokens and whatnot, you need to opt into that. There is no secure way for a program to read environment variables and access the internet in some unbounded way. By restricting this and at least being able to observe and click through these things and agree, very much like in the web browser, when you access a website, and it’s like, I want to access your location API or access your webcam, in Deno you opt into operating system access. That is what is meant by secure by default and what is meant by running URLs directly. It’s not just URLs it can run, it can also execute npm scripts. You can do deno run npm:cowsay, is a little npm package that prints out some ASCII art. I’m just going to try to have it say cowsay hello. For some reason, the npm package cowsay wants access to my current working directory, so we’ll allow that. It’s also trying to read its own package JSON file, it’s a little bit weird. It’s also trying to read a .cow file, which I assume is the ASCII art. Once I grant that access, it can indeed print out this stuff. Of course, you can bypass all of these permission checks with allow read, which just allows you to read files from your disk without writing them, without getting network access, without getting environment variables. In this way you can execute things with some degree of confidence.

Deno is a single executable, so I will just demo that. It’s a relatively large executable, but are you ever going to know that if I didn’t tell you that? It’s 100 megabytes, but it contains quite a lot of functionality. It is pretty tight. It doesn’t link to too many system libraries. This is what we’ve got it linked to. It is this one file that you need to do all of your Typescript and JavaScript functionality. We do ship on Mac, Linux, and Windows all fully supported. I think it’s a nice property that this executable is all you need. You only need that one file. When you install Deno, you literally install a single executable file. I had mentioned that Deno takes web standards seriously. This is a subset of some of the APIs that Deno supports. Of course, globalThis, is this annoying name for the global scope that TC39 invented. That is the web standard. WebAssembly, of course. Web Crypto is supported. Web Workers are supported, that’s how you do parallel computation in Deno, just like in the browser. TransformStream, EventTarget, AbortController, the location object, FormData, Request and Response. Window variable, window.close, localStorage. It goes quite deep.

Just to demo this a little bit, I want to gzip a file with web standards here. Let me, deno init qcon10. I’m using 10 because I’ve been doing this for hours now. I think I’ve got a lot of QCon directories here. Let me open up VS Code and potentially make that big enough that you’re able to view this. What I’ve run is a deno init script that creates a few files here. What I can do is deno run main.ts, and that adds two numbers together. No big deal. I’m going to delete this stuff. What I want to do is actually open up two files. I’ll open up etc password. Then I’m going to open up a file, out.gz. I’m going to compress that text file into a gzip file, but using web streams here, so Deno.open etc/passwd is the first one. That’s an async operation, and I’m going to await it, top level await here. This is the source file that we’re reading from. Then I’m going to open up a destination file Deno.open out.gz, in the current directory, and we want this to be, write is true, and create is true. We’ll let Copilot do a lot of the work there. What we can do is this SRC file has a property on it, readable, which is actually a readable stream. I can do pipeThrough, again web standard APIs, new CompressionStream for gzip. The problem with Copilot is you can’t really trust it. Then we’re going to pipeThrough this gzip CompressionStream, and then pipe to the destination that’s writeable. This is a very web standard way of gzipping a file. Let’s try to run that guy. Of course, it’s trying to access etc password, so we have to allow that. Yes, it’s trying to get write access to this out.gz file. There. Hopefully, we’ve created an out.gz file, and hopefully that file is slightly less than this password file, which that appears to be the case. Web standard APIs are very deep. In particular, when it comes to streaming APIs and web servers, Deno has it all very deeply covered. If you want to stream a response and pipe it through something else, all of this is dealt with, with standard APIs.
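For reference, the streaming compression shown in the demo can be reconstructed as the following minimal sketch (file names follow the talk; adjust to your environment):

// Compress a file using web-standard streams in Deno (reconstruction of the demo above).
// Run with: deno run --allow-read --allow-write compress.ts
const src = await Deno.open("/etc/passwd");                           // readable source file
const dst = await Deno.open("out.gz", { write: true, create: true }); // writable destination

// Pipe the source's readable stream through a gzip CompressionStream into the destination.
await src.readable
  .pipeThrough(new CompressionStream("gzip"))
  .pipeTo(dst.writable);

console.log("wrote out.gz");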

I mentioned that Deno has Node compatibility. This was not an original goal of Deno. Deno set out with blazing new trails. What we found over time is that, actually, there is a lot of code that depends on npm and Node, and that people are unable to operate in this world effectively without being able to link to npm packages, and npm packages, of course, are built on top of Node built-in APIs. We actually have this all pretty much implemented at this point. There are, of course, many places where this falls through. Just as a demo of this, we’re going to import the Node file system, so import from node:fs, and I’ll do readFileSync. Before I was using the Deno.apis, some global object that provides all of the Deno APIs. Here, I’m just going to do readFileSync etc, I’ll use my same file here. This thing returns a Node buffer, not to be confused with a web standard buffer. Very different things. I’ll just console.log.out just as a demo of running a Node project. Of course, the security still applies here. Even though you’re going through the built-in Node APIs, you do not actually have access to the file system without allowing that, so you have to grant that either through the permission prompt or through flags. There we go. We’ve read etc password now, and outputted a buffer. This fs API is rather superficial, but this goes pretty deep. It’s not 100% complete, but it is deep.
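The Node-compatibility snippet from the demo, as a short sketch:

// Use a built-in Node API from Deno via the node: specifier.
// Deno's permission model still applies: run with --allow-read or answer the prompt.
import { readFileSync } from "node:fs";

const buf = readFileSync("/etc/passwd"); // returns a Node Buffer, not a web-standard ArrayBuffer
console.log(buf);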

The Best Way to Build npm Packages: DNT

I do want to mention this project called DNT. DNT stands for Deno Node Transform. One of the big problems with npm packages is that you need to provide different transpilation targets. Are you building an ESM module? Are you building a common JS module? What engines are you targeting? It’s pretty hard to code that up properly by hand, there’s a lot of things to get wrong. At this point, we are thinking of treating npm packages as essentially compilation targets rather than things that people write by hand themselves, because it is so complex and so easy to get wrong. DNT takes care of this for you. It will output a proper npm module with common JS support and ESM support, and creates tests for you, and does this all in a cross-platform way that can be tested on Node, and can polyfill any things that are not in Node, can provide you with toolings there. I just wanted to advertise this, even if you’re not using Deno, this is a great tooling to actually distribute npm packages.
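As a rough illustration of what a DNT build script looks like (based on the project's README; the entry point, package name, and version below are placeholders, not taken from the talk):

// scripts/build_npm.ts — build an npm package from Deno-first TypeScript with dnt.
// Run with: deno run -A scripts/build_npm.ts 0.1.0
import { build, emptyDir } from "https://deno.land/x/dnt/mod.ts";

await emptyDir("./npm");

await build({
  entryPoints: ["./mod.ts"],            // placeholder entry point
  outDir: "./npm",                      // emits ESM, CommonJS, and type declarations
  shims: { deno: true },                // shim Deno globals so the output runs on Node
  package: {
    name: "my-package",                 // placeholder npm metadata
    version: Deno.args[0] ?? "0.0.0",
    description: "Example package built with dnt",
  },
});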

Express Demo

As a slightly more serious example, let’s do some Express coding here. In Deno, you don’t need to set up this package JSON file, you can just import things. You can use a package JSON file, but it’s boilerplate at some point. Let me do this, import express from npm:express, a slightly different import here. Then let me see here. I’m going to const app = express, and then app.listen is something, and console.log listening on http://localhost:3000. Just throw together my little Express app here. Let’s see if this thing is running. For some reason, it wants to access the CWD. I think this must be something weird with Express, also environment variables. Also, of course, it wants to listen on port 3000. I think once we’ve jumped through those hoops, we should be able to curl localhost at port 3000, and get Hello World. In order to not have to repeat that all the time, let me just add some command line flags here, –allow-read –allow-env –allow-net, not allow write. There is a couple of red squigglies here, because I’m using TypeScript, because this is transparent TypeScript support here, is giving me some errors, and saying, this response has an any type. The problem with Express because it’s a dated package is that it doesn’t actually distribute TypeScript types itself. We have this admittedly somewhat nasty little pragma that you need to do here, where you have to link this import statement with npm:types here. Modern npm packages will actually distribute types, and link them in the package JSON. This shouldn’t be necessary. Just because Express is so old, you have to give a hint to the compiler about how to do this.

We’re still getting one little squiggly here. Now we have our proper little Express server. In fact, let me just make this a little bit better. We’ve got this –watch command built into Deno, so let me just add my flags here to this deno task. Deno task is something like npm scripts, so I can just do deno task dev, and now it’s going to reload the server every time I change any files here. If I want to change that from Hello World, to just Hello, it will reload that. We’ve got our little Express server.
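Put together, the Express example from the demo looks roughly like the following sketch (the port, permission flags, and types pragma follow the talk):

// main.ts — Express in Deno via the npm: specifier.
// Express ships no types of its own, so point the compiler at the DefinitelyTyped package:
// @deno-types="npm:@types/express@4"
import express from "npm:express@4";

const app = express();

app.get("/", (_req, res) => {
  res.send("Hello World");
});

// Run with: deno run --allow-net --allow-env --allow-read main.ts
app.listen(3000, () => console.log("listening on http://localhost:3000"));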

We’ll keep working with that example, but just as an interlude, here’s some of the tooling built into Deno. The one that I wanted to point out here is standalone executables. Let’s take this little Express server that we’ve built, and let’s try to deno compile main.ts. What is this going to do? This is going to take all of the dependencies, including all of the Express and npm dependencies, package it up into a single executable file, and output it so that we can distribute this single file that is this tiny little Express server. I’m going to do this. Looks like it worked. There’s my qcon10 executable. Now it’s still prompting me for permission. Actually, what I want to do is provide a little bit more –allow-net –allow-env –allow-read, provide these permissions to the compiled output so that we can actually run this without flags. Now we’ve got this little distributable qcon10 file that is an ARM64 Macintosh executable here. You can package this up and send it around, it is hermetic. It will persist into the future. It doesn’t depend on any external things. It is not downloading packages from npm. It includes all of the dependencies there. What’s super cool about this, though, is that we can do this trick with target. Target allows you to cross-compile for different platforms. Let’s say that we wanted to actually output qcon10.exe for distribution on Windows. We’ll provide a target and we’ll do qcon10.exe. There we go, we’ve got our, hopefully, qcon10.exe. We’ve got our exe file that should be executable on MS Windows that contains all of these dependencies, just like the previous one. Similarly, you can target Linux, of course. This is extremely convenient for distributing code. You do not want to distribute a tarball with all of these dependencies. You want to email somebody one file and say execute this thing.

Deno Deploy: The Easiest Serverless Platform

Deno is a company. We are also building a commercial product. All of this stuff that I’ve been showing you, is all open source. Of course, MIT licensed, very free stuff. We take that infrastructure, combine it with some proprietary stuff, and we are building a serverless platform. This is the easiest serverless platform you have ever seen. It is quite robust. It is, first and foremost, serverless, and so scales to zero is a quite important goal for anything that we’re developing inside Deno Deploy. It, of course, supports npm packages, and Deno software in general. It has built-in storage and compute, which is really interesting when you start getting into this. Those also have serverless aspects to them. It is low latency everywhere. If you have a user in Japan, when you deploy to Deno Deploy, they will get served locally in Japan, rather than having to make a round the world hop to your application server. It is focused on JavaScript. Because of that, we can deliver very fast cold start times. We are just starting up a little isolate. This is not a general-purpose serverless system like lambda. I started this talk by saying that JavaScript has a future unlike all other technologies. Maybe semiconductor technology is going to be here 5 years from now. In these kinds of high-level technologies, JavaScript is pretty unique in that we can say that it is going to be there in the future. That is why we feel comfortable building a system specifically for JavaScript. This is a platform unlike say Python. It does have different things. This system is production ready and serving real traffic. For example, Netlify Edge Functions is built on top of Deno Deploy. This is actually supporting quite a lot of load.

Let’s try to take our Express server and deploy it to Deno Deploy and see what that’s like. What I’m going to do is go to the website, dash.deno.com. This is what Deno Deploy looks like. I’m going to create a new project. I’m going to create a new blank project here. It’s given me a name, hungry-boar-45. For good memory, we’ll call it qcon10. Now I have to deploy it. I can go look up the command for this, but I know what it is, deployctl. What I’m going to do is just say, give it the project name, deployctl deploy, and then I have to give it the main file that it’s going to run. I’ll give this main.ts. Before I execute this, let me just delete these executables that I outputted here, so it doesn’t inadvertently upload those. Let me also just give the prod flag. This is uploading my JavaScript code to Deno Deploy, and making a deployment. Before you know it, you’ve got this subdomain, which we can curl and gives me a Hello response. This is maybe seemingly quite trivial, but this is deployed worldwide in the time that we run this command. When you access this subdomain, and if you access this 2 years from now, hopefully this will continue to be persisted. This is running in a serverless way. Costs us essentially nothing to run a site that gets no traffic.

Deno KV: Simple Data Storage for JavaScript

To take this example maybe one step further, I mentioned that this has some state involved here. What we’re doing right now is just responding with Hello. Let’s make this slightly more realistic. Obviously, real websites are going to be accessing databases and whatnot. You can, of course, connect to your Postgres database. You can, of course, connect to MongoDB. What we’ve added in Deno Deploy is this Deno KV, which is a very simple serverless database that is geared towards application state, specifically in JavaScript. It is very critically zero configuration to set up. It will just be there out of the box. It does have ACID transactions, so you can use this for real consistent state. It scales to zero, of course. It’s built into Deno Deploy. Under the hood, it’s FoundationDB, which is a very robust industrial database that is running, for example, Snowflake, and iCloud at Apple, a very serious key-value database that provides all of these primitives. We, of course, are not engineering a database from scratch. That would be absolute insanity. What we are trying to do is provide a very simple state mechanism that you can use in JavaScript, for very simple states that you might need to persist. You would be surprised at how useful this actually is. It’s a key-value database, and it operates on JavaScript objects. It’s important to note that the keys are not strings, but JavaScript arrays. You can think of it as like a directory structure, a hierarchical directory. Those keys can be strings, or the values in those arrays can be strings, or integers, or actually byte arrays. The values can be any JavaScript object, not just a JSON serializable JavaScript object. You can dump date objects in there. You can dump BigInts. You can dump Uint8Arrays. It’s very transparently nice.

Let’s try to add a little bit of state to the system. What I’m going to do is first of all open the database, so Deno.openKv is the access to this, and you have to await that. This is how you access it, and you do const kv equals that. I’m getting a little red squiggly line around this, because it’s like, what is this openKv thing? This stuff is still in beta and behind the unstable flag. I do need to go into my settings and enable the unstable APIs in Deno. I just click this thing, and it should go away. What I want to do is just set a real simple value here. As I said, the keys are arrays. I’m going to just start a key that’s called counter, which is an array with a single string in it. I’m going to set this to the value 0. I have to await this. Then I’m going to get this counter and get back 0, so const v, and I’ll just console.log that. Let’s run this file again and see if I haven’t really messed this up, so deno task dev. It’s giving me another warning here, openKv is not a function, again, this is because I don’t have the unstable flag. Let me just add unstable in here and try this again. There we go. Now when I run this file, at the top level, I’ve stored a key and gotten a value back out of it. That’s interesting. Let me just give a name to this key here, and delete this code. Actually, let me save some of this code.

Let me get this counter out of here and return it in the response of this Express server here. I’m going to await this kv.get, and then I’m going to return Hello, maybe I’ll say counter, with the value of that counter. I’ve got a little red squiggly here, and it’s saying that await expressions can only be allowed inside async functions. Got to turn this into an async. That should make it happy. Let’s just test it, curl localhost. I’m getting counter is zero. Then the question is like, how do I increment this counter? We’re going to deploy this to Deno Deploy, where it’s going to be propagated all around the world. How are we going to increment that counter? Let me show you. You can do atomic transactions. I mentioned ACID transactions in KV. You can do kv.atomic sum, we’re going to increment this counter key that we’ve got. We’ll increment it by 1. Let’s see if that works. We’ll curl this, and it is not working. We have to commit the transaction, so curl. It has crashed, because this is a non-U64 value in the database. Let me use key there, and let’s just use a different value here. Counter 1, counter 2, counter 3, so we are now incrementing this value from the database. My claim here is that, although this is just a counter, can now be deployed to this serverless environment. I’ll just run the same command that I did before, this deploy CTL thing. We’ll do deployctl deploy prod project equals qcon10 main.ts.
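Reconstructed, the stateful handler from the demo looks roughly like this (the key name is illustrative; KV was still behind the unstable flag at the time of the talk):

// Deno KV-backed counter; run with --unstable (or --unstable-kv in newer Deno versions).
// @deno-types="npm:@types/express@4"
import express from "npm:express@4";

const kv = await Deno.openKv(); // zero-config KV: SQLite locally, FoundationDB on Deno Deploy
const app = express();

app.get("/", async (_req, res) => {
  // Atomically increment the counter; sum() works on 64-bit unsigned values, hence the bigint.
  await kv.atomic().sum(["counter"], 1n).commit();

  const entry = await kv.get<Deno.KvU64>(["counter"]);
  res.send(`Hello, counter: ${entry.value?.value ?? 0n}`);
});

app.listen(3000, () => console.log("listening on http://localhost:3000"));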

My claim here is that this is, although, a very simple server still, somewhat stateful, so every time I’m accessing this qcon, the counter is increasing quite rapidly. This thing is deployed worldwide. If I go into my project page up here on Deno Deploy, you can get this KV tab, and inside of this you can see some details about this. The right region for the KV values is in U.S.-East. There are read replicas for this. Actually, the only one that’s turned on right now is U.S.-East-4, but we’ve got two others available. We could turn on the Los Angeles read replica. In fact, let’s just do that, it takes a second. This will replicate the data to Los Angeles so that when we read from that value, we are getting something. I’ll leave it at that. It’s suffice to say that this is pretty useful in many situations where you need to share state between different isolates running in different regions. I don’t think it takes the place of a real relational database. If you have a user’s table and that sort of stuff, you probably want to manage that with migrations. There are many situations in particular, like session storage where something like this becomes very valuable.

Deno Queues: Simple At Least Once Delivery

We also have KV queues. This is a simple way of doing at least once delivery. This is, in particular, very useful for webhooks, where you don’t want to do a bunch of computation inside the webhook. What you want to do is queue up something and do that asynchronous from the webhook request. I’ll just leave the link, https://deno.com/blog/queues.
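A minimal sketch of the queues API, based on the Deno KV documentation (the message shape here is illustrative):

// Deno Queues: at-least-once delivery built on Deno KV.
const kv = await Deno.openKv();

// Register a handler that processes messages asynchronously, outside the webhook request path.
kv.listenQueue(async (msg: unknown) => {
  console.log("processing", msg);
  // ...do the slow work here...
});

// Enqueue work from a webhook handler; delivery can optionally be delayed (milliseconds).
await kv.enqueue({ kind: "webhook", payload: { id: 123 } }, { delay: 1_000 });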

Questions and Answers

Participant 1: Can I get your spicy take on the class undone?

Dahl: JavaScript is super important. There is a lot of room for improvement here. Deno started off, as I mentioned, not Node compatible, and we are operating in a world, pushing out what is possible beyond npm. I think the Bun project has created some good competition in that sense. It’s just like, no, actually, this is very important for stuff. I think that pushes our work in Deno to be receptive to the needs of npm users. We can’t just elbow our way into a developer community and say that you need to use HTTPS imports for everything. Very good in that sense. I really like Rust. I’m very confident that a very heavily typed, very complex system like what we are building, is manageable quite well in Rust. We continue to develop with urgency, and we will see where this goes in the future.

Participant 2: I’m curious if you’ve thought about the psychology of the permission system in terms of users or developers will just say allow all. How do you deal with that decision fatigue?

Dahl: Of course, when you’re developing yourself on some software, you’re just going to do allow all, because you know what you’re running. The JavaScript V8 engine provides a very secure sandbox. I think it’s a real loss if you just throw that away and say like, no, you have unrestricted access to the operating system, bar nothing. It gets in your way, but I think that’s good. Sure, maybe it introduces decision fatigue and maybe people do allow all. That’s their problem. Better to have the choice than to not. I think there’s work that we can do in the ergonomics of it to make it a little bit more usable and user friendly. Generally, we are pretty confident with our secure by default design.

Participant 3: The slide where you were talking about supporting web standard APIs, that makes a ton of sense. I read between the lines as that being a goal to allow people to use the same code in the browser, but they were even better than in some cases, which seems like a noble goal. Then your import statements are a little bit different, so they wouldn’t work in a browser. I’m curious like how those pieces fit together.

Dahl: We are touching the file system. We are creating servers. We’re doing all sorts of stuff that is not going to run in the web browser. We’re writing in TypeScript in particular, that is not going to run in the web browser. Just because there is something that you’re doing differently on server-side JavaScript, doesn’t mean we need to reinvent the world. I think a big problem in the Node world, for example, has been the HTTP request API. When I made that API there was no fetch. I just invented it. Let’s import HTTP request. That was fine until fetch got invented. Then there was this huge gap for many years. These days Node is fixing things, I think in the next release actually fetch is going to be very well supported. I think, just because you can’t run TypeScript in the web browser, or you don’t have npm imports, doesn’t mean that we should just throw off all compatibility. Think of it as a Venn diagram. We want that Venn diagram to be as close as possible. Yes, of course, it’s a server-side JavaScript system, it is not going to do exactly the same things as browsers. Let’s not just make it two distinct, separate systems. That creates a lot of confusion for users.

Participant 4: I’m wondering if the mKV would be part of the open source implementation, or is that a commercial product?

Dahl: You saw me running it locally. The question is like, what is it doing there? Locally, it’s actually using a SQLite database, hidden behind the scenes. The purpose of that is for testing so that you can develop your KV applications and check the semantics of it, run your unit tests, without having to set up some mock KV server. That is all open source and fine. You’re free to take that and do what you want. Obviously, you can ignore it if you want to. The FoundationDB access comes when you deploy to Deno Deploy. There are ways to connect, for example, Node to the hosted FoundationDB, even if you’re not running in Deno Deploy, that’s called KV connect. It is a hosted solution, and it is a proprietary commercial operation. First of all, you’re free to ignore it. You can run it locally. The SQLite version of it is open source.




Dagger Enables Developer Functions for CI/CD and Opens the Daggerverse

MMS Founder
MMS Matt Saunders

Article originally posted on InfoQ. Visit InfoQ

The open-source Dagger project, which aims to be “CI/CD as code that runs anywhere,” recently released version 0.10. This release introduces custom Dagger Functions, a feature that simplifies CI scripts while expanding possibilities for developers seeking cleaner, more efficient pipelines. Also announced is the Daggerverse – a searchable index for public Dagger Functions.

Dagger Functions are the interface to the fundamental operations in Dagger, as each core operation is exposed via the API as a function. The recent release adds capabilities to allow developers to write their own functions, package them as reusable modules, and call them directly from the CLI. The new release adds the following new functionality:

Custom Function Authorship: This feature enables developers to create their own Dagger Functions, extending the API’s capabilities infinitely. Developers can write functions using the Dagger SDK, which allows them to code Dagger Functions in languages like Go, Python, and TypeScript. The Dagger Engine compiles developers’ functions into a specialized container at runtime, exposing a custom GraphQL API for invocation. This makes the functions immediately composable into dynamic pipelines, just like core Dagger functionality. Functions can call other functions, and they don’t all have to be written in the same language.

Reusable Modules: Dagger Functions are designed for safe reuse, fostering collaboration within the community. Developers can easily share and consume functions packaged into Dagger Modules. These modules are hosted as source code in a Git repository, ensuring decentralized distribution, version control, and dependency management, and without opinions about repository layout. When the modules are run, they are built locally. Semantic versioning is accommodated, and dependencies are pinned by default.

CLI Integration: Dagger Functions can be called directly from the command line interface using the Dagger CLI tool. This allows developers to run functions either from local storage or directly from a Git repository. The CLI introspects the module’s API, and exposes available functions and arguments, streamlining the invocation process.
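As an illustration only (not taken from the release announcement), a custom Dagger Function written with the TypeScript SDK might look roughly like the sketch below; the decorator and client names follow the Dagger TypeScript module documentation and should be verified against the Dagger version in use:

// Illustrative Dagger Function module in TypeScript (API names per Dagger's TS SDK docs;
// treat this as a sketch and check it against your Dagger version).
import { dag, Directory, object, func } from "@dagger.io/dagger";

@object()
class MyModule {
  // Run a project's tests inside a container; callable from the CLI, e.g. `dagger call test --src=.`
  @func()
  async test(src: Directory): Promise<string> {
    return dag
      .container()
      .from("node:20")
      .withDirectory("/app", src)
      .withWorkdir("/app")
      .withExec(["npm", "ci"])
      .withExec(["npm", "test"])
      .stdout();
  }
}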

Shortly after the release of Dagger Functions, Dagger also announced the Daggerverse – a searchable index of publicly available functions intended to help developers discover modules provided by the community. It provides easy access to modules for common tasks such as linting, building, security scanning, secrets integration, and deployment to popular cloud providers.

Dagger Functions are initially targeted towards CI optimization, but their versatility could extend beyond traditional CI workflows. Potential applications include test data management and SaaS integration. The Dagger team anticipates broader adoption across various development domains. Reaction from the community has been positive; writing on X (formerly Twitter), Tom Hacohen says:

“I first saw a demo of Dagger two years ago and you could sense something big was brewing. Now with Dagger functions […] it really feels like something we’ll all be using”.

The Dagger team suggests starting a migration by replacing cumbersome scripts and incrementally integrating Dagger Functions into projects. Quickstart Guides and Developer Module Guides cater to both newcomers and seasoned users, ensuring a smooth onboarding experience.

About the Author



Terraform 1.7 Adds Config-Driven Remove and Test Mocking Ahead of OpenTofu

MMS Founder
MMS Rafal Gancarz

Article originally posted on InfoQ. Visit InfoQ

Hashicorp announced the release of Terraform 1.7, a new version of the popular Infrastructure as Code (IaC) tool. Terraform now supports config-driven remove capability, a safer way to remove resources from the managed stack’s state data. The new version also comes with mock providers and overrides, as well as several other enhancements in the test framework.

Terraform previously introduced config-driven blocks for moving and importing resources. With the 1.7 release, a new removed block is available for resources and modules that can be used for state removal as part of the Terraform plan and apply commands. This approach is preferable to the terraform state rm command, which requires manual operations on individual resources.

removed {
  # The resource address to remove from state
  from = aws_instance.example

  # The lifecycle block instructs Terraform not to destroy the underlying resource
  lifecycle {
    destroy = false
  }
}

Terraform 1.7 also includes an enhancement for config-driven import: it is now possible to use for_each loops inside import blocks. The new approach allows importing multiple instances of a resource or module in a single block.

In the earlier 1.6 version, the Terraform team released a new testing framework, which allows for creating and running unit and integration tests validating Terraform configurations. The new version enhances the testing framework, especially with a new mocking capability.

Dan Barr, technical product marketing manager at Hashicorp, explains the benefits of using mocking in Terraform tests:

Previously, all tests were executed by making actual provider calls using either a plan or apply operation. This is a great way to observe the real behavior of a module. But it can also be useful to mock provider calls to model more advanced situations and to test without having to create actual infrastructure or requiring credentials. This can be especially useful with cloud resources that take a long time to provision, such as databases and higher-level platform services. Mocking can significantly reduce the time required to run a test suite with many different permutations, giving module authors the ability to thoroughly test their code without slowing down the development process.

Engineers using Terraform can now mock providers and resources to generate fake data for all computed attributes. The new mock_provider block offers the ability to specify values for computed attributes of resources and data sources. Test suite creators can mix mocked and real providers to create flexible test setups.

mock_provider "aws" {
  mock_resource "aws_s3_bucket" {
    defaults = {
      arn = "arn:aws:s3:::test-bucket-name"
    }
  }
}

run "sets_bucket_name" {
  variables {
    bucket_name = "test-bucket-name"
  }

  # Validates a known attribute set in the resource configuration
  assert {
    condition     = output.bucket == "test-bucket-name"
    error_message = "Wrong bucket name"
  }

  # Validates a computed attribute using the mocked resource
  assert {
    condition     = output.arn == "arn:aws:s3:::test-bucket-name"
    error_message = "Wrong ARN value"
  }
}

The new Terraform version additionally supports overrides that allow replacing specific instances of resources, data sources, and modules for testing purposes. Overrides allow engineers to reduce the execution time of resource provisioning or child module execution and simulate different outputs to exercise different test scenarios comprehensively.

mock_provider "aws" {}

override_module {
  target = module.big_database
  outputs = {
    endpoint = "big_database.012345678901.us-east-1.rds.amazonaws.com:3306"
    db_name  = "test_db"
    username = "fakeuser"
    password = "fakepassword"
  }
}

run "test" {
  assert {
    condition     = module.big_database.username == "fakeuser"
    error_message = "Incorrect username"
  }
}

The latest version ships with further enhancements to the testing framework, including the ability to reference variables and run outputs in test provider blocks, use HashiCorp Configuration Language (HCL) functions in variable and provider blocks, and load variable values for tests from *.tfvars files.

Terraform 1.7 can be downloaded from the Hashicorp website, and the full changelog and the upgrade guide are available on GitHub.

OpenTofu, the fork of Terraform created in response to HashiCorp’s decision to adopt the Business Source License, also included the module testing feature in its first GA release (1.6). According to the roadmap, the project plans to support mocking providers and overrides, as well as the config-driven remove feature, in the subsequent 1.7 release. Additionally, OpenTofu is focused on shipping state encryption, a highly anticipated capability within the community, as a flagship feature of the new release.

About the Author



Insider Sell: Couchbase Inc (BASE) SVP & Chief Revenue Officer Huw Owen Disposes of Shares

MMS Founder
MMS RSS

Posted on nosqlgooglealerts. Visit nosqlgooglealerts

Couchbase Inc (NASDAQ:BASE), a company specializing in modern database solutions for enterprise applications, has reported an insider sell according to the latest SEC filings. SVP & Chief Revenue Officer Huw Owen sold 11,581 shares of the company on March 22, 2024. The transaction was disclosed in an SEC filing.

Over the past year, Huw Owen has been active in the market, selling a total of 156,707 shares and making no purchases of the company’s stock. This latest transaction continues the trend of insider sells at Couchbase Inc, with a total of 45 insider sells and no insider buys occurring over the past year.


On the day of the insider’s recent sell, shares of Couchbase Inc were trading at $26.79. The company’s market cap stood at $1.2489 billion, reflecting its valuation at the time of the transaction.

Couchbase Inc is known for its engagement in the development of database technology that allows enterprises to leverage the power of NoSQL to create and manage dynamic web, mobile, and IoT applications. The company’s platform is designed to provide agility, manageability, and performance at scale, which is critical for businesses operating in the digital economy.

The insider’s trading activity is often closely watched by investors as it can provide insights into the company’s performance and insider perspectives on the stock’s value. However, it is important to note that insider transactions may not always be indicative of future stock performance and can be influenced by a variety of factors, including personal financial needs and portfolio diversification strategies.

This article, generated by GuruFocus, is designed to provide general insights and is not tailored financial advice. Our commentary is rooted in historical data and analyst projections, utilizing an impartial methodology, and is not intended to serve as specific investment guidance. It does not formulate a recommendation to purchase or divest any stock and does not consider individual investment objectives or financial circumstances. Our objective is to deliver long-term, fundamental data-driven analysis. Be aware that our analysis might not incorporate the most recent, price-sensitive company announcements or qualitative information. GuruFocus holds no position in the stocks mentioned herein.

This article first appeared on GuruFocus.



Should Oracle Buy MongoDB? – Yahoo News Singapore

MMS Founder
MMS RSS

Posted on nosqlgooglealerts. Visit nosqlgooglealerts

Originally published by Sramana Mitra on LinkedIn: Should Oracle Buy MongoDB?

According to a Market Research Media report published earlier this year, the global NoSQL market is estimated to grow 21% annually to $3.4 billion by 2024. Analysts believe that with unstructured data having surpassed structured data globally, NoSQL will be the primary DBMS platform going forward. Billion Dollar Unicorn MongoDB (Nasdaq:MDB) is already delivering results on this high growth trend.

MongoDB’s Financials

Earlier this month, MongoDB announced its second quarter revenues that surpassed market expectations. Revenues of $57.5 million grew 61% over the year with Subscription revenues growing 63% to $52.9 million. Services revenues grew 48% to $4.6 million. Net loss grew from $26.1 million a year ago to $31.2 million, or $0.61 per share. Non-GAAP net loss reduced marginally from $21.2 million a year ago to $21.1 million, or $0.41 per share. The Street had forecast revenues of $51.7 million for the quarter with a loss of $0.45 per share.

For the current quarter, it expects revenues of $59-$60 million with a Non-GAAP net loss of $0.40-$0.38 per share. The market was looking for revenues of $59.92 million for the quarter with a loss of $0.40 per share. The company expects to end the current year with revenues of $228-$230 million and a net loss of $1.66-$1.62 per share. The market was forecasting revenues of $229.9 million with a net loss of $1.62 per share.

MongoDB’s Product Growth

During the quarter, MongoDB announced the release of MongoDB 4.0. The upgraded platform continues to be developer-friendly with the introduction of the Stitch feature and the availability of multi-document ACID transactions. Stitch allows for serverless applications, so developers can build value-added code without having to write back-end code for an application server. Without Stitch, an application would need to reside on a server in a data center, and querying the database would require developers to go through that server. According to MongoDB, this feature makes the service a better choice for mission-critical use cases. The upgraded platform is also ACID compliant across multiple documents – a big win for a database offering. ACID (Atomicity, Consistency, Isolation, Durability) is a set of database guarantees that ensures data integrity for transaction-based businesses.

MongoDB is also seeing significant traction for its Atlas offering. Atlas is its managed cloud database-as-a-service offering that added nearly 1,000 customers in the quarter. Atlas revenue grew 400% over the year and accounted for nearly 18% of total revenue for the quarter. MongoDB Atlas allows its customers to easily create sophisticated policies and distribute data closer to their users for low-latency performance. More recently, MongoDB Atlas was made available in Montreal and the Netherlands. The service is now available in 15 global Google Cloud regions.

MongoDB’s products are clearly very developer friendly. With annual revenues of nearly $230 million, it is still a tiny fraction of the multi-billion dollar NoSQL database market. According to TrustRadius, MongoDB is among the top rated NoSQL database providers. But the competition in the industry is strong with players like Cassandra, Redis, and Amazon’s DynamoDB in the race as well.

MongoDB has been slowly chipping away at the database market. Some analysts believe it has been eating into Oracle’s share of the $40 billion market by offering a service that is more flexible and costs much less. Oracle does not pay much attention to MongoDB, though. Last year, Oracle’s co-CEO, Mark Hurd, explained in another interview that he wasn’t worried about MongoDB since it was a much smaller company in the market. According to him, there are several features in SQL databases that are not yet available in MongoDB’s offering. But that, in my opinion, is not such a difficult gap to fill. I think that, given the current cloud hanging over Oracle, it may be a good idea for it to consider adding MongoDB to its portfolio. Adding MongoDB would give Oracle the break it is looking for by bringing in NoSQL capabilities and would help it put together a stronger defense against Amazon’s offering – a rival it is constantly in a war of words with.

I would like to know whether MongoDB’s latest upgrade makes it a more preferred offering among users. Database customers are sticky customers and don’t switch often or easily. In your experience, what is it that MongoDB will need to add to its services to make you make that switch?

MongoDB went public in October last year when it raised $256 million by selling shares at $24 each. The stock has had a stellar run so far. It is currently trading at $75.10 with a market capitalization of $3.92 billion. It had touched a 52-week high of $85.25 earlier this month and has been climbing back from the 52-week low of $24.62 it had fallen to in December last year. The company’s capitalization has soared from the $1.6 billion it was valued at when the stock listed. Prior to the listing, MongoDB was privately held, had been valued at $1.8 billion, and had raised $311 million from investors including Union Square Ventures, Flybridge Capital Partners, Sequoia Capital, Red Hat, Intel Capital, In-Q-Tel, New Enterprise Associates, EMC, Salesforce, Fidelity Investments, T. Rowe Price, and Altimeter Capital.

Looking For Some Hands-On Advice?

For entrepreneurs who want to discuss their specific businesses with me, I’m very happy to assess your situation during my free online 1Mby1M Roundtables, held almost every week. You can also check out my LinkedIn Learning course here, my Lynda.com Bootstrapping course here, and follow my writings here.

Photo credit: Garrett Heath/Flickr.com.



Simplify private connectivity to Amazon DynamoDB with AWS PrivateLink

MMS Founder
MMS RSS

Posted on nosqlgooglealerts. Visit nosqlgooglealerts

Amazon DynamoDB is a serverless, NoSQL, fully-managed database that delivers single-digit millisecond performance at any scale. It’s a multi-Region, multi-active, durable database with built-in security, backup and restore, and in-memory caching for internet-scale applications.

Customers can access DynamoDB from within their VPC using gateway endpoints. For workloads that run on-premises, customers often set up proxy servers or firewall rules to route and restrict traffic to DynamoDB, because gateway endpoints are not compatible with AWS Direct Connect or AWS Virtual Private Network (AWS VPN). The additional infrastructure setup can increase operational load and complexity, while complicating compliance in some cases. Customers have asked for a solution that enables private network connectivity from on-premises workloads to DynamoDB without needing additional proxy infrastructure.

We are excited to announce AWS PrivateLink support for DynamoDB. With PrivateLink, you can simplify private network connectivity from on-premises workloads to DynamoDB using interface VPC endpoints and private IP addresses. Interface endpoints are compatible with Direct Connect and AWS VPN for end-to-end private network connectivity. As a result, you can eliminate the need for public IP addresses, proxy infrastructure, or firewall rules for on-premises access to DynamoDB, and maintain compliance. You can drive low-cost private network connectivity by using gateway endpoints for in-VPC network traffic and interface endpoints for on-premises network traffic to DynamoDB.

In this post, we illustrate the use of a PrivateLink-backed interface VPC endpoint with DynamoDB by emulating an on-premises environment. For more examples of using interface endpoints with PrivateLink, refer to use case examples.

On-premises access to DynamoDB using private IP addresses

For this post, let’s assume that an insurance company needs to access the quotes and claims data stored in DynamoDB to re-evaluate their risk scores stored on their on-premises mainframe systems.

The on-premises workload is a local machine that connects to the VPC in the us-west-1 Region through AWS Client VPN, and uses the interface endpoint backed by PrivateLink to interact with DynamoDB using private IP addresses. The following diagram illustrates the architecture of the setup.

The solution contains the following key components:

  • An AWS Client VPN endpoint is associated with the VPC
  • OpenVPN-based AWS Client VPN is configured on the local machine to connect to the VPC using the AWS Client VPN endpoint
  • A PrivateLink-backed interface VPC endpoint for DynamoDB is created and associated with the subnets of the VPC
  • On-premises applications running on the (local) machine can access DynamoDB tables privately using the interface VPC endpoint

In the following sections, we configure this setup and create a new PrivateLink-backed interface VPC endpoint.

Prerequisites

To get started, make sure that you have a network setup as follows:

  • A VPC in a Region with AWS Client VPN set up with a local machine
  • The interface endpoint’s security group is the same as the on-premises environment or includes an inbound rule for receiving traffic from the on-premises environment (AWS Client VPN endpoint)
  • If using AWS Client VPN, the authorization rules of the AWS Client VPN endpoint allow traffic to the Classless Inter-Domain Routing (CIDR) blocks of the subnets that the interface endpoint is associated with
  • Optionally, Python3 installed on a local machine with the AWS SDK for Python (Boto3) installed

Configure the solution

Complete the following steps using Amazon Virtual Private Cloud (Amazon VPC) to create a PrivateLink-backed interface VPC endpoint for DynamoDB in a VPC in the us-west-1 Region:

    1. Navigate to Amazon VPC and choose Endpoints in the navigation pane.
    2. Choose Create endpoint.
    3. For Name tag, enter an optional tag.
    4. For Service category, select AWS services.
    5. Search for and select the DynamoDB interface type endpoint. This is the PrivateLink-backed interface VPC endpoint for DynamoDB.
    6. Because interface endpoints are associated with VPCs, choose your intended VPC, the subnets within the VPC that you would like to associate the interface endpoint with, and the security group you would like to configure.
    7. Provide a VPC endpoint policy to restrict DynamoDB actions that are allowed for AWS Identity and Access Management (IAM) entities interacting with DynamoDB using the interface VPC endpoint. For this post, select Full access.
      It is recommended to narrow down access within this policy based on the principle of least privilege; a sketch of applying a narrower endpoint policy with the AWS SDK for Python follows the example output below.
    8. Choose Create endpoint.
      The endpoint creation process may take a few minutes.

      After the interface endpoint is successfully created, you will see multiple DNS names for the interface VPC endpoint, which will be specific to your VPC. The DNS names may contain a single entry for the Regional endpoint, and individual zonal entries for each Availability Zone that your configured subnets belong to.
    9. Copy the Regional DNS name to use in your application on the local machine connected to the VPC using AWS Client VPN.
    10. Pass the DNS name as endpoint_url when initializing the Boto3 SDK’s DynamoDB client, along with the appropriate region_name. The parameter name may differ across AWS SDKs.
import boto3

# Create a DynamoDB client that routes requests through the interface
# VPC endpoint by passing its Regional DNS name as endpoint_url.
ddb_client = boto3.client(
    "dynamodb",
    region_name="us-west-1",
    endpoint_url="https://vpce-xxxx-yyyy.dynamodb.us-west-1.vpce.amazonaws.com",
)

# Read a single item; the request travels over the private connection.
response = ddb_client.get_item(
    TableName="plays",
    Key={"pk": {"S": "64.0"}, "sk": {"S": "2014-01-02T09:44:24Z"}},
)

print(response["Item"])

Output:
{'sk': {'S': '2014-01-02T09:44:24Z'}, 'data': {'S': '208356596'}, 'pk': {'S': '64.0'}, 'type': {'S': 'sample'}}
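
As noted in step 7, the endpoint policy can be tightened after creation without recreating the endpoint. The following sketch uses a hypothetical endpoint ID, IAM role ARN, and table ARN, and shows one way to apply a narrower policy with the AWS SDK for Python; adapt the actions, principals, and resources to your own workload.

import json

import boto3

ec2 = boto3.client("ec2", region_name="us-west-1")

# Hypothetical identifiers; replace with your endpoint ID, role ARN, and table ARN.
endpoint_id = "vpce-xxxx"
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {"AWS": "arn:aws:iam::123456789012:role/onprem-app-role"},
            "Action": ["dynamodb:GetItem", "dynamodb:Query"],
            "Resource": "arn:aws:dynamodb:us-west-1:123456789012:table/plays",
        }
    ],
}

# Replace the endpoint's policy document with the narrower policy.
ec2.modify_vpc_endpoint(
    VpcEndpointId=endpoint_id,
    PolicyDocument=json.dumps(policy),
)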

Cross-Region DynamoDB access using private IP addresses

Similar to the on-premises-to-VPC scenario using AWS Client VPN, you can also use PrivateLink-backed interface VPC endpoints to privately access cross-Region DynamoDB resources through private IP addresses. The two VPCs must be peered, with their route tables updated appropriately. The architecture is illustrated below.

In this case, we have a VPC-based AWS Lambda application in the us-east-1 Region that accesses the DynamoDB table in the us-west-1 Region using the PrivateLink-backed interface VPC endpoint in the peered VPC.
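
As a rough sketch of the Lambda side of this setup (the environment-variable name, table name, and key schema are illustrative assumptions, not values from this post), the handler below reads from the us-west-1 table through the interface endpoint reachable over the peering connection:

import os

import boto3

# The interface endpoint's Regional DNS name is assumed to be provided
# through an environment variable on the function, for example:
# https://vpce-xxxx-yyyy.dynamodb.us-west-1.vpce.amazonaws.com
ENDPOINT_URL = os.environ["DDB_VPCE_URL"]

ddb_client = boto3.client(
    "dynamodb",
    region_name="us-west-1",
    endpoint_url=ENDPOINT_URL,
)

def handler(event, context):
    # Cross-Region read from the us-east-1 Lambda function to the
    # us-west-1 table, carried over the peering connection and the
    # interface endpoint's private IP addresses. The event is assumed
    # to carry the item's key attributes.
    response = ddb_client.get_item(
        TableName="plays",
        Key={"pk": {"S": event["pk"]}, "sk": {"S": event["sk"]}},
    )
    return response.get("Item")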

Access resources using private IP addresses

A key advantage of interface endpoints is that they resolve to private IP addresses within the specific subnets of the associated VPC. For instance, if the VPC has a CIDR range of 172.31.0.0/16 with two subnets, the interface endpoint resolves to IP addresses within this range: at least one IP address for each subnet associated with the endpoint. See the following output:

$ dig vpce-xxxx-yyyy.dynamodb.us-west-1.vpce.amazonaws.com

; <<>> DiG 9.10.6 <<>> vpce-xxxx-yyyy.dynamodb.us-west-1.vpce.amazonaws.com
...
;; QUESTION SECTION:
;vpce-xxxx-yyyy.dynamodb.us-west-1.vpce.amazonaws.com. IN A

;; ANSWER SECTION:
vpce-xxxx-yyyy.dynamodb.us-west-1.vpce.amazonaws.com. 60 IN A 172.31.8.44
vpce-xxxx-yyyy.dynamodb.us-west-1.vpce.amazonaws.com. 60 IN A 172.31.16.71

Although the DNS name resolves publicly, it resolves to private IP addresses that belong to the subnets of the VPC, so the interface endpoint can’t be used to connect to DynamoDB from the internet; access remains private end to end. In an on-premises setup, any traffic that is routed to the VPC using AWS Direct Connect or AWS Client VPN can be configured to route to the interface endpoint seamlessly.
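
If you prefer to verify this from code rather than with dig, a quick check from a machine connected to the VPC might look like the following sketch (the endpoint DNS name is the placeholder used throughout this post):

import socket

# Resolve the interface endpoint's Regional DNS name and print the
# private IP addresses it maps to (one or more per associated subnet).
addresses = {
    result[4][0]
    for result in socket.getaddrinfo(
        "vpce-xxxx-yyyy.dynamodb.us-west-1.vpce.amazonaws.com", 443
    )
}
print(addresses)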

On-premises networks

To set up interface endpoints for DynamoDB when using on-premises applications, you need a Direct Connect or VPN solution to establish connectivity with the VPC. Additionally, make sure that a route is configured to the CIDR of the VPC to which the interface endpoint is associated. Furthermore, configure the security group of the interface endpoint to include an inbound rule from the on-premises network CIDR. Lastly, ensure that DNS configuration is implemented on-premises to resolve the DNS name of the interface endpoint, which is resolvable under the DynamoDB public DNS domain. For further details on network-to-VPC connectivity, refer to Network-to-Amazon VPC connectivity options.
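
As a minimal sketch of the security group piece of that checklist (the security group ID and on-premises CIDR below are hypothetical placeholders), the following Boto3 call adds the required inbound HTTPS rule:

import boto3

ec2 = boto3.client("ec2", region_name="us-west-1")

# Allow HTTPS (the protocol DynamoDB requests use) from the on-premises
# network range to the security group attached to the interface endpoint.
ec2.authorize_security_group_ingress(
    GroupId="sg-0123456789abcdef0",  # hypothetical security group ID
    IpPermissions=[
        {
            "IpProtocol": "tcp",
            "FromPort": 443,
            "ToPort": 443,
            "IpRanges": [
                {"CidrIp": "10.20.0.0/16", "Description": "On-premises network"}
            ],
        }
    ],
)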

The following diagram illustrates how a PrivateLink-backed interface VPC endpoint can facilitate the connection between on-premises applications and DynamoDB tables in the AWS Cloud. This setup also incorporates the gateway endpoint for routing in-VPC traffic.

Considerations

Consider the following when using PrivateLink-backed interface endpoints:

  • Whether you opt for gateway endpoints and/or interface endpoints for DynamoDB, your network traffic remains within the AWS network in both scenarios. It is recommended to continue using gateway endpoints for interactions within a single VPC, while using interface endpoints for interactions from peered VPCs and on-premises data centers (a small client-selection sketch follows this list). As of this writing, interface endpoints support IPv4 addresses exclusively.
  • PrivateLink can handle up to 100 Gbps per Availability Zone per endpoint. If your company’s data transfer from the on-premises data center to the DynamoDB interface endpoint exceeds 100 Gbps per Availability Zone, you can configure additional interface endpoints to accommodate your projected data transfer requirements.
  • There are no data processing or hourly charges associated with using gateway VPC endpoints for DynamoDB. However, standard charges apply when using interface endpoints with PrivateLink. For more information, refer to AWS PrivateLink Pricing.
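
To make the first consideration concrete, the following sketch shows one way an application might pick its endpoint: on-premises callers pass the interface endpoint’s DNS name, while in-VPC callers use the default Regional endpoint, which the gateway endpoint routes privately. The environment-variable convention is an assumption of this sketch, not part of the feature.

import os

import boto3

def dynamodb_client(region: str = "us-west-1"):
    # On-premises callers set DDB_VPCE_URL to the interface endpoint's
    # Regional DNS name; in-VPC callers leave it unset and use the
    # default endpoint, which the gateway endpoint routes privately.
    vpce_url = os.environ.get("DDB_VPCE_URL")
    if vpce_url:
        return boto3.client("dynamodb", region_name=region, endpoint_url=vpce_url)
    return boto3.client("dynamodb", region_name=region)

client = dynamodb_client()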

Clean up

If you followed along and created resources in your AWS account, delete the AWS Client VPN endpoint, interface VPC endpoint, Lambda function, and any other AWS resources created as part of this post.

Conclusion

PrivateLink for DynamoDB enables you to simplify your network architecture by establishing connections to DynamoDB from on-premises data centers or within AWS using private IP addresses within the VPC. PrivateLink also removes the requirement for public IP addresses, configuring firewall rules, or setting up an internet gateway to access DynamoDB from on-premises locations. This new feature is accessible in all AWS Commercial Regions.

Use the PrivateLink-backed interface VPC endpoint for DynamoDB and share your feedback in the comments section.


About the authors

Aman Dhingra is a DynamoDB Specialist Solutions Architect based out of Dublin, Ireland. He is passionate about distributed systems and has a background in big data & analytics technologies. As a DynamoDB Specialist Solutions Architect, Aman helps customers design, evaluate, and optimize their workloads backed by DynamoDB.

Ashwin Venkatesh is a Senior Product Manager for Amazon DynamoDB at Amazon Web Services, and is based out of Santa Clara, California. With 25+ years in product management and technology roles, Ashwin has a passion for engaging with customers to understand business use cases, defining strategy, working backwards to define new features that deliver long-term customer value, and having deep-dive discussions with technology peers. Outside work, Ashwin enjoys travel, sports and family events.
