Mobile Monitoring Solutions


Leaked data from Shopify plugins developed by Saara

MMS Founder
MMS RSS

Posted on mongodb google news. Visit mongodb google news


Security

Wednesday, March 27, 2024


The Cybernews research team uncovered a data breach stemming from a publicly accessible MongoDB database tied to Saara, a developer of Shopify plugins. The exposed data included over 7 million orders and sensitive customer information, remained accessible for eight months, and was held for a ransom of 0.01 Bitcoin.

The Cybernews research team discovered that a vast amount of shoppers’ sensitive data was exposed to threat actors by Saara, a developer of plugins for the e-commerce giant Shopify, with millions of orders leaked.

Key findings from the Cybernews report on the data breach affecting Shopify plugins developed by Saara:

  • Researchers discovered a publicly accessible MongoDB database belonging to a US-based company, Saara, that is developing Shopify plugins.
  • The leaked database stored 25GB of data.
  • The leaked data was collected from over 1,800 Shopify stores using the company’s plugins.
  • It held data from more than 7.6 million individual orders, including sensitive customer data.
  • The data stayed up for grabs for eight months and was likely accessed by threat actors.
  • The database contained a ransom note demanding 0.01 Bitcoin (around $640), or the data would be released publicly.

Plugins confirmed as affected by the leak:

  • EcoReturns: for AI-powered returns 
  • WyseMe: to acquire top shoppers

Leaked data included:

  • Customer names 
  • Email addresses 
  • Phone numbers 
  • Addresses 
  • Information about ordered items 
  • Order tracking numbers and links 
  • IP addresses 
  • User agents
  • Partial payment information

Some of the online stores most affected by the leak:

  • Snitch
  • Bliss Club
  • Steve Madden
  • The Tribe Concepts
  • Scoboo.in
  • OneOne Swimwear


Article originally posted on mongodb google news. Visit mongodb google news



Ridge Holland update – Gerweck.net

MMS Founder
MMS RSS

Posted on mongodb google news. Visit mongodb google news

On last night’s episode of NXT, Ridge Holland addressed the WWE Universe. He acknowledged that his performances over the past few weeks fell short of his usual standards. Holland emphasized the significance of his family in his life. Recognizing that his focus and passion were wavering, he made the difficult decision to step away from in-ring competition indefinitely.

Ridge Holland was then moved to the Alumni section on WWE’s official website. Fightful Select’s Corey Brennan has since reported that this is indeed a storyline: “NXT sources have confirmed to Corey Brennan of Fightful Select that this is part of Holland’s current storyline and is not an official retirement. The story has been one that Holland has been motivated to do, with the former Brawling Brute being receptive to suggestions.”

Article originally posted on mongodb google news. Visit mongodb google news



DigitalOcean Introduces CPU-based Autoscaling for its App Platform

MMS Founder
MMS Sergio De Simone

Article originally posted on InfoQ. Visit InfoQ

DigitalOcean has launched automatic horizontal scaling for its App Platform PaaS, aiming to free developers from the burden of manually scaling services up or down based on CPU load.

This capability helps ensure that your applications can seamlessly handle fluctuating demand while optimizing resource usage and minimizing costs. You can configure autoscaling either through the user interface or via the appspec.

Besides simplifying the creation of scalable services on their managed platform, the new capability should also help optimize performance and cost, says DigitalOcean.

The autoscaling capability continuously collects CPU metrics and compares the average CPU utilization across all containers against a pre-defined threshold. When CPU utilization over a given period exceeds the threshold, the current deployment is cloned to create new container instances. When CPU utilization falls below the threshold, the system instead automatically removes container instances. Users have the option to set the threshold and to limit the maximum and minimum number of container instances allowed to run at any given time.
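
The decision rule described above can be sketched roughly as follows; this is an illustrative model only, not DigitalOcean's actual implementation:

# Illustrative sketch of the scaling rule described above, not DigitalOcean's
# actual algorithm: compare average CPU across containers to a threshold and
# keep the instance count within the configured bounds.
def desired_instance_count(cpu_percentages, current_count, threshold=80,
                           min_instances=2, max_instances=10):
    avg_cpu = sum(cpu_percentages) / len(cpu_percentages)
    if avg_cpu > threshold:
        return min(current_count + 1, max_instances)  # clone another container
    if avg_cpu < threshold:
        return max(current_count - 1, min_instances)  # remove a container
    return current_count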

Autoscaling is always available for any component with dedicated instances. To enable it, you only need to define the maximum and minimum instance count and a CPU utilization threshold. The following snippet shows how you can define those values in an appspec file:

...
services:
- autoscaling:
    max_instance_count: 10
    min_instance_count: 2
    metrics:
      cpu:
        percent: 80
...

Once you have defined your appspec file, you can deploy that configuration using the DigitalOcean CLI with doctl apps update. Alternatively, you can use the DigitalOcean API and provide all required configuration parameters in a JSON payload.
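
As a hedged sketch of the API route (the endpoint shape and field names are assumptions based on the public v2 Apps API and should be checked against the current API reference; the app ID and token are placeholders), an update could look something like this in Python:

import requests

# Hypothetical example: pushing an updated app spec with autoscaling settings
# through the DigitalOcean v2 Apps API. APP_ID and DO_TOKEN are placeholders.
APP_ID = "your-app-id"
DO_TOKEN = "your-api-token"

spec = {
    "name": "sample-app",
    "services": [{
        "name": "web",
        "autoscaling": {
            "min_instance_count": 2,
            "max_instance_count": 10,
            "metrics": {"cpu": {"percent": 80}},
        },
    }],
}

resp = requests.put(
    f"https://api.digitalocean.com/v2/apps/{APP_ID}",
    headers={"Authorization": f"Bearer {DO_TOKEN}"},
    json={"spec": spec},
)
resp.raise_for_status()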

If you do not want to use the autoscaling capability, you must provide an instance_count value instead of max_instance_count and min_instance_count.

Appspec is a YAML-based format developers can use to configure apps running on DigitalOcean App Platform, including external resources as well as environment and configuration variables. It is also possible to configure an app using the Web-based interface and then download its appspec as a backup at a specific point in time.

App Platform is DigitalOcean’s platform-as-a-service (PaaS) solution that allows developers to create their deployment from a Git repository or using pre-built container images, with the platform taking care of the entire application lifecycle.



Ridge Holland announces he is “stepping away from in-ring competition” – Gerweck.net

MMS Founder
MMS RSS

Posted on mongodb google news. Visit mongodb google news

Ridge Holland delivered an emotional speech in the ring, expressing the difficulty of what he was about to say. He acknowledged the perception surrounding him and the impact of his job on his personal life. Holland revealed he had tough discussions with loved ones and himself, citing a lack of the mental and physical resilience required for rugby and wrestling. In the best interest of his family, he announced his indefinite departure from in-ring competition.

Holland has been moved to the alumni section of WWE’s website.

Article originally posted on mongodb google news. Visit mongodb google news



How to Build Your Own RAG System With LlamaIndex and MongoDB – Built In

MMS Founder
MMS RSS

Posted on mongodb google news. Visit mongodb google news

Large language models (LLMs) have provided new capabilities for modern applications, specifically, the ability to generate text in response to a prompt or user query. Common examples of LLMs include GPT-3.5, GPT-4, Bard and Llama, among others. To improve the capabilities of LLMs to respond to user queries using contextually relevant and accurate information, AI engineers and developers have developed two main approaches.

The first is to fine-tune the baseline LLM with proprietary and context-relevant data. The second, and more cost-effective, approach is to connect the LLM to a data store with a retrieval model that extracts semantically relevant information from the database to add context to the LLM user input. This approach is referred to as retrieval augmented generation (RAG).

Retrieval Augmented Generation (RAG) Explained

A retrieval augmented generation (RAG) system connects an LLM to a data store with a retrieval model that returns semantically relevant results. It improves responses by adding relevant information from those sources to user queries. RAG systems are a cost-effective approach to developing LLM applications that provide up-to-date and relevant information compared to fine-tuning.

RAG systems are an improvement on fine-tuning models for the following reasons:

  • Fine-tuning requires a substantial amount of proprietary, domain-specific data, which can be resource-intensive to collect and prepare. 
  • Fine-tuning an LLM for every new context or application demands considerable computational resources and time, making it a less scalable solution for applications needing to adapt to various domains or data sets swiftly.

What Is Retrieval Augmented Generation (RAG)?

RAG is a system design pattern that leverages information retrieval techniques and generative AI models to provide accurate and relevant responses to user queries. It does so by retrieving semantically relevant data that supplements the user query with additional context, which is then combined with the query as input to the LLM.

Retrieval augmented generation process. | Image: Richmond Alake

The RAG architectural design pattern for LLM applications enhances the relevance and accuracy of LLM responses by incorporating context from external data sources. However, integrating and maintaining the retrieval component can increase complexity, which can add to the system’s overall latency. Also, the quality of the generated responses heavily depends on the relevance and quality of the data in the external source.
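
To make the pattern concrete, below is a minimal, library-agnostic sketch of the retrieve-augment-generate flow just described; retriever and llm are illustrative placeholders rather than any specific library's API:

# Minimal sketch of the RAG flow: retrieve context, augment the prompt, generate.
# `retriever` and `llm` are illustrative placeholder objects, not a specific library.
def answer_with_rag(user_query, retriever, llm, top_k=3):
    # 1. Retrieval: fetch the k most semantically relevant documents
    context_docs = retriever.search(user_query, top_k=top_k)

    # 2. Augmentation: combine the retrieved context with the original query
    context = "\n\n".join(doc.text for doc in context_docs)
    prompt = (
        "Answer the question using the context below.\n\n"
        f"Context:\n{context}\n\n"
        f"Question: {user_query}"
    )

    # 3. Generation: the LLM responds grounded in the supplied context
    return llm.generate(prompt)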

RAG pipelines and systems rely on an AI stack, also known as either a modern AI stack or Gen AI stack.

This refers to the composition of models, databases, libraries and frameworks used to build modern applications with generative AI capabilities. The infrastructure leverages parametric knowledge from the LLM and non-parametric knowledge from data to augment user queries.

Components of the AI stack include the following: models, orchestrators or integrators, and operational and vector databases. In this tutorial, MongoDB will act as the operational and vector database.

AI and POLM stack components. | Image: Richmond Alake

A tutorial on the basics of a retrieval augmented generation (RAG) system. | Video: IBM Technology

More on LLMs: What Enterprises Need to Know Before Adopting an LLM

How to Create a RAG System

The tutorial outlines the steps for implementing the standard stages of a RAG pipeline. Specifically, it focuses on creating a chatbot-like system to respond to user inquiries about Airbnb listings by offering recommendations or general information.

1. Install Libraries

The following code snippet installs various libraries that provide functionality to access large language models, reranking models, and database connection methods. These libraries abstract away much of the complexity of writing extensive code, condensing common operations into just a few lines and method calls.

  • LlamaIndex: This is an LLM/data framework that provides functionalities to connect data sources — files, PDFs and websites, etc. — to both closed (OpenAI or Cohere) and open-source (Llama) large language models. The LlamaIndex framework abstracts complexities associated with data ingestion, RAG pipeline implementation and development of LLM applications.
  • PyMongo: A Python driver for MongoDB that enables functionality to connect to a MongoDB database and query data stored as documents using various methods provided by the library.
  • Datasets: This is a Hugging Face library that provides access to a suite of data collections by specifying their path on the Hugging Face platform.
  • Pandas: A library that enables the creation of data structures that facilitate efficient data processing and modification in Python environments.
!pip install llama-index
!pip install llama-index-vector-stores-mongodb
!pip install llama-index-embeddings-openai
!pip install pymongo
!pip install datasets
!pip install pandas

2. Data Loading and OpenAI Key Setup

The command below assigns an OpenAI API key to the environment variable OPENAI_API_KEY. This is required to ensure LlamaIndex creates an OpenAI client with the provided OpenAI API key to access features such as LLM models (GPT-3, GPT-3.5-turbo and GPT-4) and embedding models (text-embedding-ada-002, text-embedding-3-small, and text-embedding-3-large).

import os
os.environ["OPENAI_API_KEY"] = ""

The next step is to load the data within the development environment. The data for this tutorial is sourced from the Hugging Face platform, more specifically, the Airbnb data set made available via MongoDB. This data set comprises Airbnb listings, complete with property descriptions, reviews, and various metadata. 

Additionally, it features text embeddings for the property descriptions and image embeddings for the listing photos. The text embeddings have been generated using OpenAI’s text-embedding-3-small model, while the image embeddings are produced with OpenAI’s CLIP-ViT-B/32 model, both accessible on Hugging Face.

from datasets import load_dataset
import pandas as pd

# https://huggingface.co/datasets/MongoDB/airbnb_embeddings
# Make sure you have a Hugging Face token (HF_TOKEN) in your development environment
dataset = load_dataset("MongoDB/airbnb_embeddings")

# Convert the dataset to a pandas dataframe
dataset_df = pd.DataFrame(dataset['train'])

dataset_df.head(5)

To fully demonstrate LlamaIndex’s capabilities and utilization, the ‘text_embeddings’ field must be removed from the original data set. This step enables the creation of a new embedding field tailored to the attributes specified by the LlamaIndex document configuration. 

The following code snippet removes the ‘text_embeddings’ attribute from every data point in the data set.

dataset_df = dataset_df.drop(columns=['text_embeddings'])

3. LlamaIndex LLM Configuration

The code snippet below is designed to configure the foundational and embedding models necessary for the RAG pipeline. 

Within this setup, the base model selected for text generation is the ‘gpt-3.5-turbo’ model from OpenAI, which is the default choice within the LlamaIndex library. While the LlamaIndex OpenAIEmbedding class typically defaults to the ‘text-embedding-ada-002’ model for retrieval and embedding, this tutorial switches to the ‘text-embedding-3-small’ model, configured here with an embedding dimension of 256.

The embedding and base models are scoped globally using the Settings module from LlamaIndex. This means that downstream processes of the RAG pipeline do not need to specify the models utilized and, by default, will use the globally specified models.

from llama_index.core.settings import Settings
from llama_index.llms.openai import OpenAI
from llama_index.embeddings.openai import OpenAIEmbedding

embed_model = OpenAIEmbedding(model="text-embedding-3-small", dimensions=256)
llm = OpenAI()

Settings.llm = llm
Settings.embed_model = embed_model

4. Creating LlamaIndex Custom Documents and Nodes

Next, we’ll create custom documents and nodes, which are considered first-class citizens within the LlamaIndex ecosystem. Documents are data structures that reference an object created from a data source, allowing for the specification of metadata and the behavior of data when provided to LLMs for text generation and embedding.

The code snippet provided below creates a list of documents, with specific attributes extracted from each data point in the data set. Additionally, the code snippet demonstrates how the original types of some attributes in the data set are converted into appropriate types recognized by LlamaIndex.

import json
from llama_index.core import Document
from llama_index.core.schema import MetadataMode

# Convert the DataFrame to a JSON string representation
documents_json = dataset_df.to_json(orient='records')

# Load the JSON string into a Python list of dictionaries
documents_list = json.loads(documents_json)

llama_documents = []

for document in documents_list:

  # Value for metadata must be one of (str, int, float, None)
  document["amenities"] = json.dumps(document["amenities"])
  document["images"] = json.dumps(document["images"])
  document["host"] = json.dumps(document["host"])
  document["address"] = json.dumps(document["address"])
  document["availability"] = json.dumps(document["availability"])
  document["review_scores"] = json.dumps(document["review_scores"])
  document["reviews"] = json.dumps(document["reviews"])
  document["image_embeddings"] = json.dumps(document["image_embeddings"])


  # Create a Document object with the text and excluded metadata for llm and embedding models
  llama_document = Document(
      text=document["description"],
      metadata=document,
      excluded_llm_metadata_keys=["_id", "transit", "minimum_nights", "maximum_nights", "cancellation_policy", "last_scraped", "calendar_last_scraped", "first_review", "last_review", "security_deposit", "cleaning_fee", "guests_included", "host", "availability", "reviews", "image_embeddings"],
      excluded_embed_metadata_keys=["_id", "transit", "minimum_nights", "maximum_nights", "cancellation_policy", "last_scraped", "calendar_last_scraped", "first_review", "last_review", "security_deposit", "cleaning_fee", "guests_included", "host", "availability", "reviews", "image_embeddings"],
      metadata_template="{key}=>{value}",
      text_template="Metadata: {metadata_str}\n-----\nContent: {content}",
      )

  llama_documents.append(llama_document)

# Observing an example of what the LLM and Embedding model receive as input
print(
    "\nThe LLM sees this: \n",
    llama_documents[0].get_content(metadata_mode=MetadataMode.LLM),
)
print(
    "\nThe Embedding model sees this: \n",
    llama_documents[0].get_content(metadata_mode=MetadataMode.EMBED),
)

The next step is to create nodes from the documents after creating a list of documents and specifying the metadata and LLM behavior. Nodes are the objects that are ingested into the MongoDB vector database and enable the use of both vector data and operational data to conduct searches.

from llama_index.core.node_parser import SentenceSplitter
from llama_index.core.schema import MetadataMode

parser = SentenceSplitter(chunk_size=5000)
nodes = parser.get_nodes_from_documents(llama_documents)

for node in nodes:
    node_embedding = embed_model.get_text_embedding(
        node.get_content(metadata_mode=MetadataMode.EMBED)
    )
    node.embedding = node_embedding

5. MongoDB Vector Database Connection and Setup

MongoDB acts as both an operational and a vector database for the RAG system. MongoDB Atlas specifically provides a database solution that efficiently stores, queries and retrieves vector embeddings.

Creating a database and collection within MongoDB is made simple with MongoDB Atlas.

  1. First, register for a MongoDB Atlas account. For existing users, sign into MongoDB Atlas.
  2. Follow the instructions. Select Atlas UI as the procedure to deploy your first cluster. 
  3. Create the database: `airbnb`.
  4. Within the `airbnb` database, create the collection ‘listings_reviews’. 
  5. Create a vector search index named vector_index for the ‘listings_reviews’ collection. This index enables the RAG application to retrieve records as additional context to supplement user queries via vector search. Below is the JSON definition of the data collection vector search index. 
{
      "fields": [
        {
          "numDimensions": 256,
          "path": "embedding",
          "similarity": "cosine",
          "type": "vector"
        }
      ]
    }

Below is an explanation of each vector search index JSON definition field.

  • fields: This is a list that specifies the fields to be indexed in the MongoDB collection and defines the characteristics of the index itself.
  • numDimensions: Within each field item, numDimensions specifies the number of dimensions of the vector data. In this case, it’s set to 256. This number should match the dimensionality of the vector data stored in the field; 256 is also one of the dimensions at which OpenAI’s text-embedding-3-small model can create vector embeddings.
  • path: The path field indicates the path to the data within the database documents to be indexed. Here, it’s set to embedding.
  • similarity: The similarity field defines the type of similarity distance metric that will be used to compare vectors during the search process. Here, it’s set to cosine, which measures the cosine of the angle between two vectors, effectively determining how similar or different these vectors are in their orientation in the vector space. Other available similarity metrics are Euclidean distance and dot product.
  • type: This field specifies the data type the index will handle. In this case, it is set to vector, indicating that this index is specifically designed for handling and optimizing searches over vector data.
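
As an alternative to clicking through the Atlas UI, newer PyMongo versions can create the same index programmatically. The sketch below is an assumption-laden example (it presumes PyMongo 4.6+ with vector search index support, an Atlas cluster that allows search index management from the driver, and a placeholder connection URI), so verify against the current driver documentation:

import pymongo
from pymongo.operations import SearchIndexModel

# Hedged sketch: a programmatic alternative to creating the index in the Atlas UI.
# "MONGO_URI" is a placeholder for your own Atlas connection string.
client = pymongo.MongoClient("MONGO_URI")
collection = client["airbnb"]["listings_reviews"]

vector_index_model = SearchIndexModel(
    definition={
        "fields": [
            {
                "numDimensions": 256,
                "path": "embedding",
                "similarity": "cosine",
                "type": "vector",
            }
        ]
    },
    name="vector_index",
    type="vectorSearch",
)
collection.create_search_index(model=vector_index_model)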

By the end of this step, you should have a database with one collection and a defined vector search index. The final step of this section is to obtain the connection uniform resource identifier (URI) string for the created Atlas cluster to establish a connection between the database and the current development environment. 

Don’t forget to whitelist the IP address of the Python host, or 0.0.0.0/0 (any IP) when creating proofs of concept.

Follow MongoDB’s steps to get the connection string from the Atlas UI. After setting up the database and obtaining the Atlas cluster connection URI, securely store the URI within your development environment.

This guide uses Google Colab, which offers a feature for securely storing environment secrets. These secrets can then be accessed within the development environment. Specifically, the line mongo_uri = userdata.get('MONGO_URI') retrieves the URI from the secure storage.

The following steps utilize PyMongo to create a connection to the cluster and obtain reference objects to both the database and collection.

import pymongo
from google.colab import userdata

def get_mongo_client(mongo_uri):
  """Establish connection to the MongoDB."""
  try:
    client = pymongo.MongoClient(mongo_uri, appname="devrel.content.python")
    print("Connection to MongoDB successful")
    return client
  except pymongo.errors.ConnectionFailure as e:
    print(f"Connection failed: {e}")
    return None

mongo_uri = userdata.get('MONGO_URI')
if not mongo_uri:
  print("MONGO_URI not set in environment variables")

mongo_client = get_mongo_client(mongo_uri)

DB_NAME="airbnb"
COLLECTION_NAME="listings_reviews"

db = mongo_client[DB_NAME]
collection = db[COLLECTION_NAME]

The next step deletes any existing records in the collection to ensure the data ingestion destination is empty.

# Delete any existing records in the collection
collection.delete_many({})

6. Data Ingestion

Using LlamaIndex, vector store initialization and data ingestion are a trivial process that can be accomplished in just two lines of code. The snippet below initializes a MongoDB Atlas vector store object via the LlamaIndex constructor MongoDBAtlasVectorSearch. Using the ‘add()’ method of the vector store instance, the nodes are directly ingested into the MongoDB database.

It’s important to note that in this step, we reference the name of the vector search index that was created earlier via the MongoDB Atlas interface. For this specific use case, the index name is “vector_index”.

Vector Store Initialization


from llama_index.vector_stores.mongodb import MongoDBAtlasVectorSearch

vector_store = MongoDBAtlasVectorSearch(mongo_client, db_name=DB_NAME, collection_name=COLLECTION_NAME, index_name="vector_index")
vector_store.add(nodes)
  • Create an instance of a MongoDB Atlas Vector Store: vector_store using the MongoDBAtlasVectorSearch constructor.
  • db_name="airbnb": This specifies the name of the database within MongoDB Atlas where the documents, along with vector embeddings, are stored.
  • collection_name="listings_reviews": Specifies the name of the collection within the database where the documents are stored.
  • index_name="vector_index": This is a reference to the name of the vector search index created for the MongoDB collection.

Data Ingestion

  • vector_store.add(nodes): Ingests each node into the MongoDB vector database where each node represents a document entry.

7. Querying the Index With User Queries

To utilize the vector store capabilities with LlamaIndex, an index is initialized from the MongoDB vector store, as demonstrated in the code snippet below.

from llama_index.core import VectorStoreIndex
index = VectorStoreIndex.from_vector_store(vector_store)

The next step involves creating a LlamaIndex query engine. The query engine enables the use of natural language to retrieve relevant, contextually appropriate information from a vast index of data. LlamaIndex’s as_query_engine method abstracts away the complexity AI engineers and developers would otherwise face in writing the implementation code to process queries appropriately and extract information from a data source.

For our use case, the query engine satisfies the requirement of building a question-and-answer application, although LlamaIndex does provide the ability to construct a chat-like application with the Chat Engine functionality.

import pprint
from llama_index.core.response.notebook_utils import display_response

query_engine = index.as_query_engine(similarity_top_k=3)

query = "I want to stay in a place that's warm and friendly, and not too far from restaurants, can you recommend a place? Include a reason as to why you've chosen your selection"

response = query_engine.query(query)
display_response(response)
pprint.pprint(response.source_nodes)

Initialization of Query Engine

  • The process starts with initializing a query engine from an existing index by calling index.as_query_engine(similarity_top_k=3).
  • This prepares the engine to sift through the indexed data to identify the top k (3 in this case) entries that are most similar to the content of the query.
  • For this scenario, the query posed is: “I want to stay in a place that’s warm and friendly and not too far from restaurants, can you recommend a place? Include a reason as to why you’ve chosen your selection.”
  • The query engine processes the input query to find and retrieve relevant responses.

Observing the Source Nodes

The LlamaIndex response object from the query engine provides additional insight into the data points contributing to the generated answer.
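
For example, you can iterate over response.source_nodes to inspect each retrieved chunk and its similarity score; the 'name' metadata key used below is an assumption about the Airbnb data set's fields:

# Inspect the retrieved context behind the generated answer.
for node_with_score in response.source_nodes:
    print(f"Score: {node_with_score.score}")
    print(f"Listing name: {node_with_score.node.metadata.get('name')}")  # assumed field
    print(node_with_score.node.get_content()[:200])  # first 200 characters of the chunk
    print("-" * 40)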

More on LLMs: How to Develop Large Language Model Applications

Understanding How to Build a RAG System

In this tutorial, we’ve developed a straightforward query system that leverages the RAG architectural design pattern for LLM applications, utilizing LlamaIndex as the LLM framework and MongoDB as the vector database. We also demonstrated how to customize metadata for consumption by databases and LLMs using LlamaIndex’s primary data structures, documents and nodes. This provides more control over metadata when building LLM applications and data ingestion processes. Additionally, it shows how to use MongoDB as a vector database and carry out data ingestion. 

By focusing on a practical use case, a chatbot-like system for Airbnb listings, this guide walks you through the step-by-step process of implementing a RAG pipeline and highlights the distinct advantages of RAG over traditional fine-tuning methods. Through the POLM AI stack (Python, OpenAI models, LlamaIndex for orchestration and MongoDB Atlas as the dual-purpose database), developers are provided with a comprehensive toolkit to innovate and deploy generative AI applications efficiently.

Article originally posted on mongodb google news. Visit mongodb google news



Axiros Launches New Release of their QoE Monitoring Solution – AXTRACT 3.5.0

MMS Founder
MMS RSS

Posted on mongodb google news. Visit mongodb google news

Monitoring and Management Of Customer QoE For Data And VoIP Services

MUNICH, March 27, 2024 (PRLog): Axiros, a leader in TR-069/TR-369, Unified Device Management, and zero-touch customer experience solutions, has announced the launch of version 3.5.0 of its Quality of Experience (QoE) monitoring solution, AXTRACT.

Axiros AXTRACT is a Quality of Experience (QoE) monitoring solution. QoE monitoring is the process of assessing the overall quality of a customer’s experience with a brand or service. It involves collecting and analyzing data on various factors that can impact customer satisfaction, such as wait times, response times, unsuccessful attempts, error rates, and more.

QoE monitoring can help telco brands and ISP providers identify areas where customers are having a poor experience and make necessary changes to improve the quality of their service. In today’s competitive market, providing an excellent customer experience is essential for success. QoE monitoring can play a vital role in helping telcos and ISPs meet this goal.

Important highlights from this release are:

  • Core modules for large-scale USP deployments
  • AXTRACT and Ansible chroot using Debian 11 (bullseye)
  • Ships and supports MongoDB-5.0 (MongoDB-4.2 is still supported)
  • Ships and supports ClickHouse-2024-08-lts
  • Ships and supports Kafka-3.5.1
  • Ships and supports Grafana-10

“This release has a high focus on bringing the latest supported software to our customers to stay ahead in the security game. On top of that, the added USP functionality brings users of AXTRACT into a position to monitor their millions of USP devices in a scalable manner,” said Stephan Bentheimer, the Lead of AXTRACT Development at Axiros.

For any technical questions about AXTRACT, please contact sales@axiros.com.

About Axiros

Any Device. Any Protocol. Any Service. Any Time | We manage all THINGS.

For over 20 years, Axiros has been a leading provider of software solutions and platforms (https://www.axiros.com/solutions) for Device Management in telecommunications and other industries. Axiros specializes in offering Device Management software that enables seamless integration and management of devices and services, leveraging industry standards like TR-069 and USP. With a global presence (https://www.axiros.com/office-locations), Axiros is committed to delivering innovative solutions that meet the evolving needs of businesses in today’s digital landscape. To learn more about Axiros, visit www.axiros.com.


Source: Axiros GmbH


Press release distribution by PRLog

Article originally posted on mongodb google news. Visit mongodb google news



Couchbase, Inc. (NASDAQ:BASE) SVP Huw Owen Sells 11,581 Shares – MarketBeat

MMS Founder
MMS RSS

Posted on nosqlgooglealerts. Visit nosqlgooglealerts

Couchbase, Inc. (NASDAQ:BASE) SVP Huw Owen sold 11,581 shares of the business’s stock in a transaction that occurred on Friday, March 22nd. The shares were sold at an average price of $26.79, for a total value of $310,254.99. Following the transaction, the senior vice president now directly owns 441,454 shares in the company, valued at approximately $11,826,552.66. The transaction was disclosed in a legal filing with the SEC.

Huw Owen also recently made the following trade(s):

  • On Thursday, January 25th, Huw Owen sold 1,376 shares of Couchbase stock. The shares were sold at an average price of $25.00, for a total value of $34,400.00.
  • On Tuesday, January 23rd, Huw Owen sold 3,500 shares of Couchbase stock. The shares were sold at an average price of $25.00, for a total value of $87,500.00.
  • On Tuesday, January 2nd, Huw Owen sold 40,604 shares of Couchbase stock. The shares were sold at an average price of $21.38, for a total value of $868,113.52.

Couchbase Trading Down 2.6%

Shares of BASE traded down $0.69 during trading hours on Tuesday, hitting $25.98. The company had a trading volume of 273,569 shares, compared to its average volume of 511,737. The company has a market capitalization of $1.25 billion, a PE ratio of -15.28 and a beta of 0.72. Couchbase, Inc. has a one year low of $13.28 and a one year high of $32.00. The company’s fifty day moving average price is $26.80 and its two-hundred day moving average price is $21.65.

Institutional Inflows and Outflows

Several hedge funds have recently added to or reduced their stakes in BASE. Swiss National Bank increased its position in Couchbase by 7.9% during the 1st quarter. Swiss National Bank now owns 25,800 shares of the company’s stock valued at $449,000 after buying an additional 1,900 shares in the last quarter. JPMorgan Chase & Co. increased its position in Couchbase by 698.1% during the 1st quarter. JPMorgan Chase & Co. now owns 46,673 shares of the company’s stock valued at $813,000 after buying an additional 40,825 shares in the last quarter. Bank of New York Mellon Corp increased its position in Couchbase by 130.7% during the 1st quarter. Bank of New York Mellon Corp now owns 64,384 shares of the company’s stock valued at $1,122,000 after buying an additional 36,474 shares in the last quarter. MetLife Investment Management LLC acquired a new position in Couchbase during the 1st quarter valued at about $255,000. Finally, Metropolitan Life Insurance Co NY acquired a new position in Couchbase during the 1st quarter valued at about $30,000. 96.07% of the stock is owned by institutional investors and hedge funds.

Wall Street Analysts Forecast Growth

A number of equities analysts recently issued reports on BASE shares. Barclays boosted their price objective on Couchbase from $29.00 to $33.00 and gave the company an “equal weight” rating in a report on Wednesday, March 6th. Guggenheim upped their target price on Couchbase from $27.00 to $32.00 and gave the stock a “buy” rating in a report on Wednesday, March 6th. DA Davidson upped their target price on Couchbase from $27.00 to $35.00 and gave the stock a “buy” rating in a report on Wednesday, March 6th. Royal Bank of Canada upped their target price on Couchbase from $32.00 to $35.00 and gave the stock an “outperform” rating in a report on Wednesday, March 6th. Finally, Morgan Stanley upped their target price on Couchbase from $20.00 to $21.00 and gave the stock an “equal weight” rating in a report on Thursday, December 7th. Three analysts have rated the stock with a hold rating and eight have assigned a buy rating to the company’s stock. Based on data from MarketBeat.com, the company has an average rating of “Moderate Buy” and an average target price of $32.40.


About Couchbase


Couchbase, Inc. provides a database for enterprise applications in the United States and internationally. Its database works in multiple configurations, ranging from cloud to multi- or hybrid-cloud to on-premise environments to the edge. The company offers Couchbase Capella, an automated and secure Database-as-a-Service that helps with database management by deploying, managing, and operating Couchbase Server across cloud environments; and Couchbase Server, a multi-service NoSQL database, which provides a SQL-compatible query language, SQL++, that allows for a wide array of data manipulation functions.




Vitess Version 19 Released: Ends Support for MySQL 5.7, Improves MySQL Compatibility

MMS Founder
MMS Aditya Kulkarni

Article originally posted on InfoQ. Visit InfoQ

Recently, Vitess launched its latest stable release v19. The highlights of this update include metrics for monitoring stream consolidations, improved query compatibility with MySQL for multi-table delete operations, support for incremental backups, and various performance enhancements, among other features.

The Vitess Maintainer Team discussed the release in a blog post which was also shared on the CNCF website. Vitess provides a database management solution designed for the deployment, scaling, and administration of large clusters of open-source database instances. At present, it provides support for MySQL and Percona Server for MySQL.

In response to Oracle marking MySQL 5.7 as end-of-life in October 2023, Vitess is aligning with these updates by discontinuing support for MySQL 5.7 in this latest release. The maintainers’ team has recommended users upgrade their systems to MySQL 8.0 while utilizing Vitess 18 before transitioning to Vitess 19. It is important to note that Vitess 19 will maintain compatibility for importing from MySQL 5.7.

The improved query compatibility with MySQL is facilitated through the SHOW VSCHEMA KEYSPACES query, along with various other SQL syntax enhancements, including the ability to perform the AVG() aggregation function on sharded keyspaces through a combination of SUM and COUNT. Furthermore, Vitess 19 broadens its support for Non-recursive Common Table Expressions (CTEs), enabling the creation of more complex queries.

To mitigate potential security vulnerabilities, communication between throttlers has been transitioned to use gRPC, discontinuing support for HTTP communication. Vitess has also introduced VSchema improvements by incorporating a --strict sub-flag and a matching gRPC field within the ApplyVSchema command. This update guarantees using only recognized parameters in Vindexes, thus improving error detection and configuration verification.

Additionally, in a move to enhance security and ensure greater system stability, the ExecuteFetchAsDBA command now rejects multi-statement inputs. Vitess plans to offer formal support for multi-statement operations in an upcoming version.

The process of Vitess migration cut-over has been updated to incorporate a back-off strategy when encountering table locks. If the initial cut-over fails, subsequent attempts will occur at progressively longer intervals, minimizing strain on an already burdened production system.

Additionally, Online DDL now offers the option of a forced cut-over, which can be triggered either after a specified timeout or on demand. This approach gives precedence to completing the cut-over, by ending queries and transactions that interfere with the cut-over process.

When it comes to general features, Vitess is equipped with JDBC and Go database drivers that comply with a native query protocol. Furthermore, it supports the MySQL server protocol, ensuring compatibility with almost all other programming languages.
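
To illustrate that protocol compatibility, the hedged sketch below connects to a hypothetical vtgate endpoint from Python using a generic MySQL driver; the host, port, credentials and schema are placeholders, not part of any real deployment:

import pymysql  # any MySQL-protocol driver works against vtgate

# Hedged sketch: connection details are placeholders for your own deployment.
conn = pymysql.connect(host="vtgate.example.internal", port=15306,
                       user="app_user", password="app_password",
                       database="commerce")
try:
    with conn.cursor() as cur:
        # AVG() on a sharded keyspace is rewritten by Vitess 19 into a
        # SUM/COUNT combination, as described above.
        cur.execute("SELECT AVG(price) FROM orders")
        print(cur.fetchone())
finally:
    conn.close()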

Numerous companies, including Slack and GitHub, have implemented Vitess to meet their production requirements. Additionally, Vitess has managed all database traffic for YouTube for more than five years. The Vitess Maintainer Team has invited the tech community to join discussions on GitHub or its Slack channel, where they can share stories, pose questions, and engage with the broader Vitess community.



Copilot in Azure SQL Database in Private Preview

MMS Founder
MMS Steef-Jan Wiggers

Article originally posted on InfoQ. Visit InfoQ

Microsoft has announced a private preview of Copilot in Azure SQL Database, which offers natural language to SQL conversion and self-help for database administration.

Azure SQL is Microsoft’s cloud-based database service that offers a broad range of SQL database features, scalability, and security to support applications’ data storage and management needs. The Copilot for Azure SQL Database introduces two features in the Azure portal:

  • Natural Language to SQL: This feature converts natural language queries into SQL within the Azure portal’s query editor for Azure SQL Database, simplifying database interactions. The query editor utilizes table and view names, column names, primary key, and foreign key metadata to generate T-SQL code, which the user can then review and execute.

           

        Generate Query capability in the Azure portal query editor  (Source: Microsoft Learn)

  • Azure Copilot Integration: By incorporating Azure SQL Database capabilities into Microsoft Copilot for Azure, this feature offers users self-service guidance and enables them to manage their databases and address issues independently. Users can ask and receive helpful, context-rich Azure SQL Database suggestions from Microsoft Copilot for Azure.

           

           Sample prompt for database performance (Source: Microsoft Learn)

Joe Sack, who works in product management for Azure SQL at Microsoft, writes:

Copilot in Azure SQL Database integrates data and formulates applicable responses using public documentation, database schema, dynamic management views, catalog views, and Azure supportability diagnostics.

In addition, regarding the query editor, Edward Dortland, a managing director at Twintos, commented in a LinkedIn post on Copilot in Azure SQL Database:

Finally, I don’t have to re-read Itzik Ben-Gan’s book whenever somebody asks me to make an update on that one stored procedure that somebody before me, made 10 years ago and is all about compounding interest calculations, on combined derivatives in a banking database. Welcome Copilot for Azure query editor, I waited so long for this moment.

Azure SQL is not the only service that has Microsoft Azure Copilot integration. This Copilot supports various services and functions across Azure, enhancing productivity, reducing costs, and providing deep insights by leveraging large language models (LLMs), the Azure control plane, and insights about the user’s Azure environment. Moreover, it is not limited to Azure alone as it extends its capabilities across the Microsoft ecosystem, such as Dynamics 365, Service Management, Power Apps, and Power Automate.

Vishal Anand, a global chief technologist at IBM, concluded in a Medium blog post on Microsoft Azure Copilot:

Era has arrived where Human worker and Digital worker can collaborate in real-time to add value to cloud operations. It is currently not recommended for production workloads (obviously it is in preview phase).

The private preview of Copilot in Azure SQL Database is available via a sign-up page, and more information is available on the FAQ page.



Presentation: Streamlining Cloud Development with Deno

MMS Founder
MMS Ryan Dahl

Article originally posted on InfoQ. Visit InfoQ

Transcript

Dahl: I’m going to just give you a whirlwind tour of Deno and some subset of the features that were built into it. These days I’m working on Deno. The web is incredibly important. Year after year, it just becomes more important. This wasn’t super clear in 2020. There was a time where maybe iPhone apps were going to replace the web. It wasn’t super obvious. I think now in 2023, it’s pretty clear that the web is here to stay, and, indeed, is the medium of human information. This is how you read your newspaper. This is how you interact with your bank. This is how you talk to your friends. This is how humans communicate with each other. It is incredibly important. We build so much infrastructure on the web at this point, with the web being a very large set of different technologies. I think it is a very fair bet to say that the web is still going to be here 5 years from now, if not 10 or 20 years from now. That is maybe unsurprising to you all, but technology is very difficult to predict. There are very few situations where you can say 5 years from now, this technology is going to be here. I think this is a rare exception. We are pretty sure that JavaScript, the central programming language of the web, will be here in the future because it’s inherently tied to the web. I think that further work in JavaScript is necessary. I think the tools that I and others developed 10-plus years ago with Node.js, and npm, and the whole ecosystem around that, have aged fairly poorly over the years. I think further investment here is necessary.

The Node Project

With Node, my goal was to really force developers to have an easy way to build fast servers. The core idea in Node, which was relatively new at the time was, to force people to use async I/O. You had to do non-blocking sockets, as opposed to threading. With JavaScript, this was all bundled up and packaged in a nice way that was easily understandable to frontend developers because they were used to non-blocking JavaScript in websites. They were used to handling buttons with on-click callbacks. Very similar to how servers are handled in Node, when you get a new connection, you get an on-connection callback. I think it’s clear these days that more can be done beyond just doing async I/O. I think this is pretty table stakes. Yet, developing servers is still hard. It’s still something that we’re stumbling across. We have a larger perspective these days on what it means to build an optimal server. It’s not just something that runs on your laptop, it’s something that needs to run across the entire world. Being able to configure that and manage that is a pretty complex situation. I’m sure you all have dealt with complex cloud configurations in some form or another. I’m sure many of you have dealt with Terraform. I am sure you have thought about how to get good latency worldwide. Having a single instance of your application server in U.S.-East is not appropriate for all applications. Sometimes you need to serve users locally, say in Tokyo, and you are bounded by speed of light problems. You have to be executing code close to where users reside.

There is just an insane amount of software and workflows and tool chains, especially in the JavaScript space where with Node, we created this small core and let a broad ecosystem develop around it. I think not an unreasonable theory, and certainly was pretty successful, but because of this wide range of tooling options in front of you when you sit down to program some server-side JavaScript, it takes a lot to understand what’s going on. Should you be using ESLint, or TSLint, or Prettier? There are all these unknown questions that make it so that you have to be an expert in the ecosystem in order to proceed correctly. Those are things that can be solved at the platform layer. Of course, I think we’ve all heard about the npm supply chain security problems where people can just publish any sort of code to the registry. There are all sorts of nasty effects where as soon as you link a dependency, you are potentially executing untrusted post install scripts on your local device, without you ever agreeing to that, without really understanding that. That is a very bad situation. We cannot be executing untrusted code from the internet. This is a very bad problem.

Deno: Next-Gen JavaScript Runtime

In some sense, Deno is a continuation of the Node project. It is pursuing the same goals, but just with an expanded perspective on what this means on what optimal servers actually entail. Deno is a next-generation JavaScript runtime. It is secure by default. It is, I say JavaScript runtime, because at the bottom layer, it is JavaScript. It treats TypeScript and JSX natively, so it really does understand TypeScript in particular, and understands the types, and can do type checking, and just handles that natively so that you are not left with trying to configure that on top of the platform. It has all sorts of tooling built into it, testing, linting, formatting, just to name a small subset. It is backwards compatible with Node and npm. It’s trying to thread this needle between keeping true to web standards, and really keeping the gap between browsers and server-side JavaScript as minimal as possible. Also being compatible with existing npm modules, existing Node software. That’s a difficult thing to do because the gap between the npm ecosystem, the Node ecosystem, and where web browser standards are specified is quite large.

Demo (Deno)

In Deno, you can run URLs directly from the command line. I’ll just type it up, https://deno.land/std@0.150.0/examples/gist.ts. Actually, let me curl this URL first, just to give you an idea. This is some source code. This is a little program that posts GISTs to GitHub. It takes a file as a parameter and uploads it. I’m going to run this file directly in a secure way with Deno, so I’m just going to Deno run that command line. What you’ll see is that, first of all, it doesn’t just execute this thing. You immediately hit this permission prompt situation where it says, Deno is trying to access this GIST token. I have an environment variable that allows me to post to my GitHub. Do you want to allow access or deny access to this? I will say yes. Then it fails because I haven’t actually provided a file name here. What I’m going to do is upload my etc password file, very secure. I will allow it to read the environment variable. Then you see that it’s trying to access the file etc password, I will allow that. Now it’s trying to post to api.github.com. I will allow that. Finally, it has actually uploaded the GIST, and luckily, my etc password file is shadowed here and doesn’t actually contain any secret information so it’s not such a big deal.

Just as an example of running programs securely, obviously, if it’s going to be accessing your operating system, and reading GIST tokens and whatnot, you need to opt into that. There is no secure way for a program to read environment variables and access the internet in some unbounded way. By restricting this and at least being able to observe and click through these things and agree, very much like in the web browser, when you access a website, and it’s like, I want to access your location API or access your webcam, in Deno you opt into operating system access. That is what is meant by secure by default and what is meant by running URLs directly. It’s not just URLs it can run, it can also execute npm scripts. You can do deno run npm:cowsay, is a little npm package that prints out some ASCII art. I’m just going to try to have it say cowsay hello. For some reason, the npm package cowsay wants access to my current working directory, so we’ll allow that. It’s also trying to read its own package JSON file, it’s a little bit weird. It’s also trying to read a .cow file, which I assume is the ASCII art. Once I grant that access, it can indeed print out this stuff. Of course, you can bypass all of these permission checks with allow read, which just allows you to read files from your disk without writing them, without getting network access, without getting environment variables. In this way you can execute things with some degree of confidence.

Deno is a single executable, so I will just demo that. It’s a relatively large executable, but are you ever going to know that if I didn’t tell you that? It’s 100 megabytes, but it contains quite a lot of functionality. It is pretty tight. It doesn’t link to too many system libraries. This is what we’ve got it linked to. It is this one file that you need to do all of your Typescript and JavaScript functionality. We do ship on Mac, Linux, and Windows all fully supported. I think it’s a nice property that this executable is all you need. You only need that one file. When you install Deno, you literally install a single executable file. I had mentioned that Deno takes web standards seriously. This is a subset of some of the APIs that Deno supports. Of course, globalThis, is this annoying name for the global scope that TC39 invented. That is the web standard. WebAssembly, of course. Web Crypto is supported. Web Workers are supported, that’s how you do parallel computation in Deno, just like in the browser. TransformStream, EventTarget, AbortController, the location object, FormData, Request and Response. Window variable, window.close, localStorage. It goes quite deep.

Just to demo this a little bit, I want to gzip a file with web standards here. Let me, deno init qcon10. I’m using 10 because I’ve been doing this for hours now. I think I’ve got a lot of QCon directories here. Let me open up VS Code and potentially make that big enough that you’re able to view this. What I’ve run is a deno init script that creates a few files here. What I can do is deno run main.ts, and that adds two numbers together. No big deal. I’m going to delete this stuff. What I want to do is actually open up two files. I’ll open up etc password. Then I’m going to open up a file, out.gz. I’m going to compress that text file into a gzip file, but using web streams here, so Deno.open etc/passwd is the first one. That’s an async operation, and I’m going to await it, top level await here. This is the source file that we’re reading from. Then I’m going to open up a destination file Deno.open out.gz, in the current directory, and we want this to be, write is true, and create is true. We’ll let Copilot do a lot of the work there. What we can do is this SRC file has a property on it, readable, which is actually a readable stream. I can do pipeThrough, again web standard APIs, new CompressionStream for gzip. The problem with Copilot is you can’t really trust it. Then we’re going to pipeThrough this gzip CompressionStream, and then pipe to the destination that’s writeable. This is a very web standard way of gzipping a file. Let’s try to run that guy. Of course, it’s trying to access etc password, so we have to allow that. Yes, it’s trying to get write access to this out.gz file. There. Hopefully, we’ve created an out.gz file, and hopefully that file is slightly less than this password file, which that appears to be the case. Web standard APIs are very deep. In particular, when it comes to streaming APIs and web servers, Deno has it all very deeply covered. If you want to stream a response and pipe it through something else, all of this is dealt with, with standard APIs.

I mentioned that Deno has Node compatibility. This was not an original goal of Deno. Deno set out with blazing new trails. What we found over time is that, actually, there is a lot of code that depends on npm and Node, and that people are unable to operate in this world effectively without being able to link to npm packages, and npm packages, of course, are built on top of Node built-in APIs. We actually have this all pretty much implemented at this point. There are, of course, many places where this falls through. Just as a demo of this, we’re going to import the Node file system, so import from node:fs, and I’ll do readFileSync. Before I was using the Deno.apis, some global object that provides all of the Deno APIs. Here, I’m just going to do readFileSync etc, I’ll use my same file here. This thing returns a Node buffer, not to be confused with a web standard buffer. Very different things. I’ll just console.log.out just as a demo of running a Node project. Of course, the security still applies here. Even though you’re going through the built-in Node APIs, you do not actually have access to the file system without allowing that, so you have to grant that either through the permission prompt or through flags. There we go. We’ve read etc password now, and outputted a buffer. This fs API is rather superficial, but this goes pretty deep. It’s not 100% complete, but it is deep.

The Best Way to Build npm Packages: DNT

I do want to mention this project called DNT. DNT stands for Deno to Node Transform. One of the big problems with npm packages is that you need to provide different transpilation targets. Are you building an ESM module? Are you building a CommonJS module? What engines are you targeting? It's pretty hard to code that up properly by hand; there are a lot of things to get wrong. At this point, we are thinking of npm packages as essentially compilation targets rather than things that people write by hand, because it is so complex and so easy to get wrong. DNT takes care of this for you. It will output a proper npm module with CommonJS and ESM support, create tests for you, do all of this in a cross-platform way that can be tested on Node, and polyfill anything that is not in Node. I just wanted to advertise this: even if you're not using Deno, this is a great tool for distributing npm packages.
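For reference, a DNT build script typically looks roughly like the sketch below; the import specifier, package name, and versions are illustrative and may differ depending on the dnt release you use:

```ts
// scripts/build_npm.ts — build an npm package from a Deno module with dnt.
// Run with: deno run -A scripts/build_npm.ts
import { build, emptyDir } from "jsr:@deno/dnt";

await emptyDir("./npm");

await build({
  entryPoints: ["./mod.ts"],   // the Deno entry point
  outDir: "./npm",             // where the ESM + CommonJS output goes
  shims: {
    deno: true,                // polyfill Deno globals for Node
  },
  package: {
    // metadata for the generated package.json (illustrative values)
    name: "my-package",
    version: "0.1.0",
    description: "Example package built with dnt.",
  },
});
```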

Express Demo

As a slightly more serious example, let's do some Express coding here. In Deno, you don't need to set up a package.json file, you can just import things. You can use a package.json file, but it's boilerplate at some point. Let me do this: import express from npm:express, a slightly different import specifier here. Then, const app = express(), app.listen on port 3000, and console.log listening on http://localhost:3000. Just throwing together my little Express app here. Let's see if this thing is running. For some reason, it wants to access the CWD, which I think must be something weird with Express, and also environment variables. Also, of course, it wants to listen on port 3000. Once we've jumped through those hoops, we should be able to curl localhost at port 3000 and get Hello World. In order to not have to repeat that all the time, let me just add some command line flags here: --allow-read --allow-env --allow-net, not --allow-write. There are a couple of red squigglies here, because I'm using TypeScript, and the transparent TypeScript support is giving me some errors saying this response has an any type. The problem with Express, because it's a dated package, is that it doesn't actually distribute TypeScript types itself. We have this admittedly somewhat nasty little pragma that you need here, where you have to link this import statement to the npm @types package. Modern npm packages will actually distribute types and link them in the package.json, so this shouldn't be necessary. Just because Express is so old, you have to give the compiler a hint about how to do this.
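Put together, the server described above looks roughly like this sketch (the version ranges are illustrative):

```ts
// main.ts — a minimal Express server under Deno.
// Run with: deno run --allow-net --allow-env --allow-read main.ts
// The pragma points the type checker at the @types package,
// since express itself does not ship TypeScript types.
// @deno-types="npm:@types/express@^4"
import express from "npm:express@^4";

const app = express();

app.get("/", (_req, res) => {
  res.send("Hello World");
});

app.listen(3000, () => {
  console.log("Listening on http://localhost:3000");
});
```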

We're still getting one little squiggly here. Now we have our proper little Express server. In fact, let me just make this a little bit better. We've got this --watch flag built into Deno, so let me just add my flags here to this deno task. deno task is something like npm scripts, so I can just do deno task dev, and now it's going to reload the server every time I change any files here. If I change that from Hello World to just Hello, it will reload. We've got our little Express server.
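The deno.json for that task might look something like this sketch, using the flags from above:

```jsonc
// deno.json — a "dev" task so the flags don't have to be retyped.
{
  "tasks": {
    "dev": "deno run --watch --allow-net --allow-env --allow-read main.ts"
  }
}
```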

We'll keep working with that example, but just as an interlude, here's some of the tooling built into Deno. The one that I wanted to point out here is standalone executables. Let's take this little Express server that we've built and try deno compile main.ts. What is this going to do? It's going to take all of the dependencies, including all of the Express and npm dependencies, package them up into a single executable file, and output it, so that we can distribute this single file that is our tiny little Express server. I'm going to do this. Looks like it worked. There's my qcon10 executable. Now, it's still prompting me for permission. Actually, what I want to do is provide a little bit more: --allow-net --allow-env --allow-read, so those permissions are baked into the compiled output and we can run it without flags. Now we've got this little distributable qcon10 file that is an ARM64 Macintosh executable. You can package this up and send it around; it is hermetic. It will persist into the future. It doesn't depend on any external things. It is not downloading packages from npm. It includes all of the dependencies. What's super cool, though, is that we can do this trick with --target. Target allows you to cross-compile for different platforms. Let's say that we wanted to output qcon10.exe for distribution on Windows. We'll provide a target and output qcon10.exe. There we go, we've got our qcon10.exe, an exe file that should be executable on MS Windows and contains all of these dependencies, just like the previous one. Similarly, you can target Linux, of course. This is extremely convenient for distributing code. You do not want to distribute a tarball with all of these dependencies. You want to email somebody one file and say, execute this thing.

Deno Deploy: The Easiest Serverless Platform

Deno is a company. We are also building a commercial product. All of this stuff that I've been showing you is open source, MIT licensed, very free stuff. We take that infrastructure, combine it with some proprietary stuff, and we are building a serverless platform. This is the easiest serverless platform you have ever seen. It is quite robust. It is, first and foremost, serverless, and so scaling to zero is quite an important goal for anything that we're developing inside Deno Deploy. It, of course, supports npm packages, and Deno software in general. It has built-in storage and compute, which gets really interesting when you start digging into it. Those also have serverless aspects to them. It is low latency everywhere. If you have a user in Japan, when you deploy to Deno Deploy, they will get served locally in Japan, rather than having to make a round-the-world hop to your application server. It is focused on JavaScript. Because of that, we can deliver very fast cold start times: we are just starting up a little isolate. This is not a general-purpose serverless system like Lambda. I started this talk by saying that JavaScript has a future unlike all other technologies. Maybe semiconductor technology is going to be here 5 years from now, but among these kinds of high-level technologies, JavaScript is pretty unique in that we can say it is going to be there in the future. That is why we feel comfortable building a system specifically for JavaScript, rather than a platform for, say, Python as well; that would involve different trade-offs. This system is production ready and serving real traffic. For example, Netlify Edge Functions is built on top of Deno Deploy. This is actually supporting quite a lot of load.

Let's try to take our Express server, deploy it to Deno Deploy, and see what that's like. What I'm going to do is go to the website, dash.deno.com. This is what Deno Deploy looks like. I'm going to create a new blank project here. It's given me a name, hungry-boar-45. To make it easier to remember, we'll call it qcon10. Now I have to deploy it. I could go look up the command for this, but I know what it is: deployctl. What I'm going to do is say deployctl deploy, give it the project name, and then give it the main file that it's going to run, main.ts. Before I execute this, let me just delete the executables that I outputted here, so it doesn't inadvertently upload those. Let me also give it the --prod flag. This is uploading my JavaScript code to Deno Deploy and making a deployment. Before you know it, you've got this subdomain, which we can curl and get a Hello response. This is maybe seemingly quite trivial, but this is deployed worldwide in the time it took to run this command. When you access this subdomain, and if you access it 2 years from now, hopefully this will continue to be persisted. This is running in a serverless way. It costs us essentially nothing to run a site that gets no traffic.

Deno KV: Simple Data Storage for JavaScript

To take this example one step further, I mentioned that this has some state involved. What we're doing right now is just responding with Hello. Let's make this slightly more realistic. Obviously, real websites are going to be accessing databases and whatnot. You can, of course, connect to your Postgres database. You can, of course, connect to MongoDB. What we've added in Deno Deploy is Deno KV, which is a very simple serverless database that is geared towards application state, specifically in JavaScript. Critically, it is zero configuration to set up. It will just be there out of the box. It does have ACID transactions, so you can use this for real, consistent state. It scales to zero, of course. It's built into Deno Deploy. Under the hood, it's FoundationDB, which is a very robust industrial database that is running, for example, Snowflake, and iCloud at Apple, a very serious key-value database that provides all of these primitives. We, of course, are not engineering a database from scratch. That would be absolute insanity. What we are trying to do is provide a very simple state mechanism that you can use in JavaScript, for simple state that you might need to persist. You would be surprised at how useful this actually is. It's a key-value database, and it operates on JavaScript objects. It's important to note that the keys are not strings, but JavaScript arrays. You can think of it as a directory structure, a hierarchical directory. The parts of those key arrays can be strings, or integers, or actually byte arrays. The values can be any JavaScript object, not just a JSON-serializable one. You can dump Date objects in there. You can dump BigInts. You can dump Uint8Arrays. It's very transparently nice.

Let's try to add a little bit of state to the system. What I'm going to do is first of all open the database. Deno.openKv is the access point, and you have to await that, so const kv = await Deno.openKv(). I'm getting a little red squiggly line around this, because it's asking, what is this openKv thing? This stuff is still in beta and behind the unstable flag. I need to go into my settings and enable the unstable APIs in Deno. I just click this thing, and it should go away. What I want to do is set a really simple value here. As I said, the keys are arrays. I'm going to set a key called counter, which is an array with a single string in it. I'm going to set this to the value 0. I have to await this. Then I'm going to get this counter and get back 0, so const v, and I'll just console.log that. Let's run this file again and see if I haven't messed this up, so deno task dev. It's giving me another warning here, openKv is not a function; again, this is because I don't have the unstable flag. Let me just add --unstable in here and try this again. There we go. Now when I run this file, at the top level, I've stored a key and gotten a value back out of it. That's interesting. Let me just give a name to this key here, and delete this code. Actually, let me save some of this code.
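Roughly, that top-level experiment looks like this sketch; depending on your Deno version the flag is --unstable or --unstable-kv:

```ts
// Requires the unstable KV API, e.g.: deno run --unstable-kv main.ts
const kv = await Deno.openKv();

// Keys are arrays; values can be most JavaScript objects,
// not just JSON-serializable ones (Date, BigInt, Uint8Array, ...).
await kv.set(["counter"], 0);

const v = await kv.get(["counter"]);
console.log(v.key, v.value); // [ "counter" ] 0
```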

Let me get this counter out of here and return it in the response of this Express server. I'm going to await this kv.get, and then I'm going to return Hello, maybe I'll say counter, with the value of that counter. I've got a little red squiggly here, saying that await expressions are only allowed inside async functions. Got to turn this handler into an async function. That should make it happy. Let's just test it: curl localhost. I'm getting counter is zero. Then the question is, how do I increment this counter? We're going to deploy this to Deno Deploy, where it's going to be propagated all around the world. How are we going to increment that counter? Let me show you. You can do atomic transactions; I mentioned ACID transactions in KV. You can do kv.atomic().sum() on this counter key that we've got. We'll increment it by 1. Let's see if that works. We'll curl this, and it is not working. We have to commit the transaction, then curl again. It has crashed, because the existing value is not a U64 value in the database, and sum only works on 64-bit unsigned integers. Let me just use a different key here. Counter 1, counter 2, counter 3, so we are now incrementing this value from the database. My claim here is that, although this is just a counter, it can now be deployed to this serverless environment. I'll just run the same command that I did before: deployctl deploy --prod --project=qcon10 main.ts.
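For reference, the counter endpoint that just got deployed looks roughly like this sketch; the key name is illustrative, and sum() requires the stored value to be a 64-bit unsigned integer (Deno.KvU64):

```ts
// @deno-types="npm:@types/express@^4"
import express from "npm:express@^4";

const app = express();
const kv = await Deno.openKv();

app.get("/", async (_req, res) => {
  // Atomically increment the counter; sum() creates the key if it is missing.
  await kv.atomic().sum(["counter2"], 1n).commit();

  const entry = await kv.get<Deno.KvU64>(["counter2"]);
  const count = entry.value?.value ?? 0n; // KvU64 wraps a bigint in .value
  res.send(`Hello, counter: ${count}`);
});

app.listen(3000, () => console.log("Listening on http://localhost:3000"));
```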

My claim here is that this, although still a very simple server, is now somewhat stateful: every time I access this qcon subdomain, the counter increases quite rapidly. This thing is deployed worldwide. If I go into my project page up here on Deno Deploy, you get this KV tab, and inside of it you can see some details. The write region for the KV values is US-East. There are read replicas for this. Actually, the only one that's turned on right now is US-East-4, but we've got two others available. We could turn on the Los Angeles read replica. In fact, let's just do that; it takes a second. This will replicate the data to Los Angeles so that reads from that region are served locally. I'll leave it at that. Suffice it to say that this is pretty useful in many situations where you need to share state between different isolates running in different regions. I don't think it takes the place of a real relational database. If you have a users table and that sort of stuff, you probably want to manage that with migrations. But there are many situations, in particular things like session storage, where something like this becomes very valuable.

Deno Queues: Simple At Least Once Delivery

We also have KV queues. This is a simple way of doing at-least-once delivery. It's particularly useful for webhooks, where you don't want to do a bunch of computation inside the webhook handler itself. What you want to do is queue something up and process it asynchronously from the webhook request. I'll just leave the link: https://deno.com/blog/queues.
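A minimal sketch of how the queue API fits together; the message shape here is made up for illustration:

```ts
// Requires the unstable KV API, e.g.: deno run --unstable-kv main.ts
const kv = await Deno.openKv();

// Register a handler; messages are delivered at least once,
// so handlers should be idempotent.
kv.listenQueue(async (msg: unknown) => {
  console.log("processing", msg);
  // ...do the heavy work here, outside the webhook request...
});

// Inside a webhook handler you would enqueue and return quickly.
await kv.enqueue({ kind: "order.created", orderId: "1234" });
```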

Questions and Answers

Participant 1: Can I get your spicy take on Bun?

Dahl: JavaScript is super important. There is a lot of room for improvement here. Deno started off, as I mentioned, not Node compatible, and we are operating in a world, pushing out what is possible beyond npm. I think the Bun project has created some good competition in that sense. It's a reminder that, no, actually, npm compatibility is very important to people. I think that pushes our work in Deno to be receptive to the needs of npm users. We can't just elbow our way into a developer community and say that you need to use HTTPS imports for everything. Very good in that sense. I really like Rust. I'm very confident that a heavily typed, very complex system like what we are building is quite manageable in Rust. We continue to develop with urgency, and we will see where this goes in the future.

Participant 2: I'm curious if you've thought about the psychology of the permission system, in the sense that users or developers will just say allow all. How do you deal with that decision fatigue?

Dahl: Of course, when you're developing your own software, you're just going to allow all, because you know what you're running. The V8 JavaScript engine provides a very secure sandbox. I think it's a real loss if you just throw that away and say, no, you have unrestricted access to the operating system, bar nothing. It gets in your way, but I think that's good. Sure, maybe it introduces decision fatigue and maybe people do allow all. That's their problem. Better to have the choice than not. I think there's work we can do on the ergonomics to make it a little more usable and user friendly. Generally, we are pretty confident in our secure-by-default design.

Participant 3: On the slide where you were talking about supporting web-standard APIs, that makes a ton of sense. I read between the lines that the goal is to let people use the same code in the browser, and maybe even do better than the browser in some cases, which seems like a noble goal. But then your import statements are a little bit different, so they wouldn't work in a browser. I'm curious how those pieces fit together.

Dahl: We are touching the file system. We are creating servers. We're doing all sorts of stuff that is not going to run in the web browser. We're writing in TypeScript in particular, and that is not going to run in the web browser. But just because there is something that you're doing differently on server-side JavaScript doesn't mean we need to reinvent the world. I think a big problem in the Node world, for example, has been the HTTP request API. When I made that API there was no fetch. I just invented it: let's import HTTP request. That was fine until fetch got invented. Then there was this huge gap for many years. These days Node is fixing things; I think in the next release fetch is going to be very well supported. Just because you can't run TypeScript in the web browser, or you don't have npm imports, doesn't mean that we should throw off all compatibility. Think of it as a Venn diagram: we want the overlap to be as large as possible. Yes, of course, it's a server-side JavaScript system, it is not going to do exactly the same things as browsers. But let's not make it two distinct, separate systems. That creates a lot of confusion for users.

Participant 4: I'm wondering if Deno KV is part of the open source implementation, or is that a commercial product?

Dahl: You saw me running it locally. The question is, what is it doing there? Locally, it's actually using a SQLite database hidden behind the scenes. The purpose of that is for testing, so that you can develop your KV applications, check the semantics, and run your unit tests without having to set up some mock KV server. That is all open source and fine. You're free to take that and do what you want. Obviously, you can ignore it if you want to. The FoundationDB backend comes when you deploy to Deno Deploy. There are ways to connect, for example, Node to the hosted FoundationDB, even if you're not running in Deno Deploy; that's called KV Connect. It is a hosted solution, and it is a proprietary commercial operation. First of all, you're free to ignore it. You can run it locally. The SQLite version of it is open source.

