15 AI News Shaping Wall Street Today – Insider Monkey

MMS Founder
MMS RSS

Posted on mongodb google news. Visit mongodb google news

Artificial intelligence is the greatest investment opportunity of our lifetime. The time to invest in groundbreaking AI is now, and this stock is a steal!

The whispers are turning into roars.

Artificial intelligence isn’t science fiction anymore.

It’s the revolution reshaping every industry on the planet.

From driverless cars to medical breakthroughs, AI is on the cusp of a global explosion, and savvy investors stand to reap the rewards.

Here’s why this is the prime moment to jump on the AI bandwagon:

Exponential Growth on the Horizon: Forget linear growth – AI is poised for a hockey stick trajectory.

Imagine every sector, from healthcare to finance, infused with superhuman intelligence.

We’re talking disease prediction, hyper-personalized marketing, and automated logistics that streamline everything.

This isn’t a maybe – it’s an inevitability.

Early investors will be the ones positioned to ride the wave of this technological tsunami.

Ground Floor Opportunity: Remember the early days of the internet?

Those who saw the potential of tech giants back then are sitting pretty today.

AI is at a similar inflection point.

We’re not talking about established players – we’re talking about nimble startups with groundbreaking ideas and the potential to become the next Google or Amazon.

This is your chance to get in before the rockets take off!

Disruption is the New Name of the Game: Let’s face it, complacency breeds stagnation.

AI is the ultimate disruptor, and it’s shaking the foundations of traditional industries.

The companies that embrace AI will thrive, while the dinosaurs clinging to outdated methods will be left in the dust.

As an investor, you want to be on the side of the winners, and AI is the winning ticket.

The Talent Pool is Overflowing: The world’s brightest minds are flocking to AI.

From computer scientists to mathematicians, the next generation of innovators is pouring its energy into this field.

This influx of talent guarantees a constant stream of groundbreaking ideas and rapid advancements.

By investing in AI, you’re essentially backing the future.

The future is powered by artificial intelligence, and the time to invest is NOW.

Don’t be a spectator in this technological revolution.

Dive into the AI gold rush and watch your portfolio soar alongside the brightest minds of our generation.

This isn’t just about making money – it’s about being part of the future.

So, buckle up and get ready for the ride of your investment life!

Act Now and Unlock a Potential 10,000% Return: This AI Stock is a Diamond in the Rough (But Our Help is Key!)

The AI revolution is upon us, and savvy investors stand to make a fortune.

But with so many choices, how do you find the hidden gem – the company poised for explosive growth?

That’s where our expertise comes in.

We’ve got the answer, but there’s a twist…

Imagine an AI company so groundbreaking, so far ahead of the curve, that even if its stock price quadrupled today, it would still be considered ridiculously cheap.

That’s the potential you’re looking at. This isn’t just about a decent return – we’re talking about a 10,000% gain over the next decade!

Our research team has identified a hidden gem – an AI company with cutting-edge technology, massive potential, and a current stock price that screams opportunity.

This company boasts the most advanced technology in the AI sector, putting them leagues ahead of competitors.

It’s like having a race car on a go-kart track.

They have a strong possibility of cornering entire markets, becoming the undisputed leader in their field.

Here’s the catch (it’s a good one): To uncover this sleeping giant, you’ll need our exclusive intel.

We want to make sure none of our valued readers miss out on this groundbreaking opportunity!

That’s why we’re slashing the price of our Premium Readership Newsletter by a whopping 70%.

For a ridiculously low price of just $29, you can unlock a year’s worth of in-depth investment research and exclusive insights – that’s less than a single restaurant meal!

Here’s why this is a deal you can’t afford to pass up:

• Access to our Detailed Report on this Game-Changing AI Stock: Our in-depth report dives deep into our #1 AI stock’s groundbreaking technology and massive growth potential.

• 11 New Issues of Our Premium Readership Newsletter: You will also receive 11 new issues and at least one new stock pick per month from our monthly newsletter’s portfolio over the next 12 months. These stocks are handpicked by our research director, Dr. Inan Dogan.

• One free upcoming issue of our 70+ page Quarterly Newsletter: A value of $149

• Bonus Reports: Premium access to members-only fund manager video interviews

• Ad-Free Browsing: Enjoy a year of investment research free from distracting banner and pop-up ads, allowing you to focus on uncovering the next big opportunity.

• 30-Day Money-Back Guarantee:  If you’re not absolutely satisfied with our service, we’ll provide a full refund within 30 days, no questions asked.

Space is Limited! Only 1000 spots are available for this exclusive offer. Don’t let this chance slip away – subscribe to our Premium Readership Newsletter today and unlock the potential for a life-changing investment.

Here’s what to do next:

1. Head over to our website and subscribe to our Premium Readership Newsletter for just $29.

2. Enjoy a year of ad-free browsing, exclusive access to our in-depth report on the revolutionary AI company, and the upcoming issues of our Premium Readership Newsletter over the next 12 months.

3. Sit back, relax, and know that you’re backed by our ironclad 30-day money-back guarantee.

Don’t miss out on this incredible opportunity! Subscribe now and take control of your AI investment future!

No worries about auto-renewals! Our 30-Day Money-Back Guarantee applies whether you’re joining us for the first time or renewing your subscription a year later!

Article originally posted on mongodb google news. Visit mongodb google news



NoSQL Software Market to Surge at 17.06% CAGR Through 2032 – openPR.com

MMS Founder
MMS RSS

Posted on nosqlgooglealerts. Visit nosqlgooglealerts


➤ NoSQL Software Market Overview:

The NoSQL Software Market is expected to grow from USD 11.43 billion in 2024 to USD 40.3 billion by 2032, at a compound annual growth rate (CAGR) of approximately 17.06% over the forecast period (2024–2032). The NoSQL software market is witnessing exponential growth as businesses seek scalable and flexible database solutions to manage unstructured data. Unlike traditional relational databases, NoSQL databases offer enhanced performance, scalability, and adaptability, making them ideal for cloud applications, big data analytics, and IoT. Their ability to handle diverse data formats has been a major driver for industries such as e-commerce, healthcare, and finance.
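
As a quick sanity check, the implied growth rate can be reproduced from the 2024 and 2032 figures quoted above; the short Python snippet below is purely illustrative.

    # Sanity-check the reported CAGR from the 2024 and 2032 market-size estimates.
    start_value = 11.43          # USD billion, 2024
    end_value = 40.3             # USD billion, 2032
    years = 2032 - 2024          # 8-year forecast period

    cagr = (end_value / start_value) ** (1 / years) - 1
    print(f"Implied CAGR: {cagr:.2%}")   # prints roughly 17.06%, matching the report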

Browse a Full Report (Including Full TOC, List of Tables & Figures, Chart) –
https://www.wiseguyreports.com/reports/nosql-software-market

Adoption of NoSQL solutions is driven by the increasing demand for real-time data processing and analytics. Enterprises are prioritizing solutions that can accommodate vast volumes of data without compromising speed. The market is also fueled by the proliferation of social media, video streaming, and e-commerce platforms, which generate large datasets. These factors collectively underscore the critical role of NoSQL software in contemporary digital transformation efforts.

➤ Market Segmentation:

The NoSQL software market is broadly segmented by database type, application, and industry vertical. By database type, it includes document-based, key-value, column-family, and graph databases, each serving unique use cases. Applications span customer relationship management, content management, and web applications. The software is tailored to meet the diverse needs of small and medium enterprises (SMEs) as well as large corporations.

In terms of industry verticals, the technology sector leads adoption, followed by retail, healthcare, and financial services. The flexibility of NoSQL databases to support multi-model architectures has enhanced their appeal across industries. Furthermore, the rise of edge computing and AI-driven applications is expanding the scope of NoSQL solutions in niche domains like logistics and smart cities.

Get a sample PDF of the report at –
https://www.wiseguyreports.com/sample-request?id=593078

➤ Market Key Players:

Key players in the NoSQL software market include:
• Hazelcast
• Couchbase
• IBM
• Amazon Web Services (AWS)
• Redis Labs
• Microsoft
• ScyllaDB
• Neo4j
• MarkLogic
• Oracle

MongoDB remains a dominant force with its Atlas platform, providing a cloud-native NoSQL database solution. AWS DynamoDB is another major player, known for its seamless integration with AWS services and highly scalable infrastructure.

Redis Labs and Couchbase are expanding their offerings with advanced capabilities like in-memory processing and distributed architectures. These companies invest heavily in R&D to stay competitive and cater to evolving customer demands. Strategic partnerships and acquisitions have also been pivotal in shaping the competitive landscape of the NoSQL software market.

➤ Recent Developments:

The NoSQL market has witnessed notable developments, with companies launching innovative solutions to address specific industry challenges. MongoDB recently introduced features for generative AI applications, enabling businesses to build smarter applications. AWS DynamoDB continues to enhance its serverless capabilities, ensuring developers can manage large workloads efficiently.

Partnerships between NoSQL vendors and cloud service providers are increasing, driving integrated solutions that streamline data management. Additionally, the open-source community plays a significant role in the market, with frameworks like Apache Cassandra and Neo4j gaining traction. These advancements reflect the dynamic nature of the NoSQL market, adapting to meet modern demands.

➤ Market Dynamics:

The NoSQL software market is shaped by key drivers such as the growing need for big data analytics, the shift towards cloud computing, and rising adoption of microservices architecture. Enterprises increasingly prefer NoSQL databases for their ability to support distributed systems and offer high availability. Furthermore, the rise of IoT and AI applications is propelling demand for NoSQL solutions capable of handling massive datasets.

However, challenges such as a lack of skilled professionals and data security concerns persist. Vendors are addressing these issues by offering user-friendly platforms and robust encryption features. As enterprises continue to modernize their IT infrastructure, the demand for NoSQL databases is expected to grow, bolstering market expansion.

➤ Regional Analysis:

North America dominates the NoSQL software market, owing to its advanced technological infrastructure and high adoption of digital transformation strategies. The region’s strong presence of key players and emphasis on cloud-based solutions further solidify its leadership position. The United States, in particular, drives growth with significant investments in big data and AI technologies.

In the Asia-Pacific region, rapid urbanization, digitalization, and the proliferation of startups contribute to increasing demand for NoSQL solutions. Countries like China and India are emerging as key markets due to their growing IT sectors. Europe and the Middle East also present opportunities as organizations in these regions transition to modern data management systems.

➤ Top Trending Reports:

• Zirconium Fluoride Optical Fiber Market –
https://www.wiseguyreports.com/reports/zirconium-fluoride-optical-fiber-market

• Self Organizing Network Son Testing Solutions Market –
https://www.wiseguyreports.com/reports/self-organizing-network-son-testing-solutions-market

• Broadband Access Service Market –
https://www.wiseguyreports.com/reports/broadband-access-service-market

• Poe Optical Fiber Switch Market –
https://www.wiseguyreports.com/reports/poe-optical-fiber-switch-market

• Mobile User Objective Systems Market –
https://www.wiseguyreports.com/reports/mobile-user-objective-systems-market

• Circuit Switched Fallback Csfb Technology Market –
https://www.wiseguyreports.com/reports/circuit-switched-fallback-csfb-technology-market

• Massive Machine Type Communication Mmtc Market –
https://www.wiseguyreports.com/reports/massive-machine-type-communication-mmtc-market

• Serial To Fiber Optic Converters Market –
https://www.wiseguyreports.com/reports/serial-to-fiber-optic-converters-market

• Unified Communications As A Service Ucaas In Healthcare Market –
https://www.wiseguyreports.com/reports/unified-communications-as-a-service-ucaas-in-healthcare-market

• Nebs Server Market –
https://www.wiseguyreports.com/reports/nebs-server-market

About Us:

Wise Guy Reports is pleased to introduce itself as a leading provider of insightful market research solutions that adapt to the ever-changing demands of businesses around the globe. By offering comprehensive market intelligence, our company enables corporate organizations to make informed choices, drive growth, and stay ahead in competitive markets.

We have a team of experts who blend industry knowledge and cutting-edge research methodologies to provide excellent insights across various sectors. Whether exploring new market opportunities, appraising consumer behavior, or evaluating competitive landscapes, we offer bespoke research solutions for your specific objectives.

At Wise Guy Reports, accuracy, reliability, and timeliness are our main priorities when preparing our deliverables. We want our clients to have information that can be used to act upon their strategic initiatives. We, therefore, aim to be your trustworthy partner within dynamic business settings through excellence and innovation.

Contact:

WISEGUY RESEARCH CONSULTANTS PVT LTD
Office No. 528, Amanora Chambers, Pune – 411028, Maharashtra, India
Sales: +91 20 6912 2998

This release was published on openPR.



Microsoft & MongoDB expand partnership for AI solutions – IT Brief Australia

MMS Founder
MMS RSS

Posted on mongodb google news. Visit mongodb google news

Microsoft and MongoDB have significantly expanded their partnership to help joint customers build improved AI applications.

The extended partnership introduces three new key capabilities aimed at enabling joint customers to enhance AI application development. The capabilities focus on enhancing large language models (LLMs) with proprietary data, generating real-time business insights, and offering tailored deployment solutions using MongoDB.

One of the enhancements allows customers to use their own proprietary data stored in MongoDB Atlas to improve AI model performance and accuracy. This aims to facilitate the creation of more intelligent and customised AI applications by leveraging data unique to each business.
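
The announcement does not include code, but the pattern described here (grounding an LLM with proprietary data held in MongoDB Atlas) is commonly implemented with Atlas Vector Search. The Python sketch below is a minimal, hypothetical illustration using pymongo; the connection string, database, collection, index name, and the embed() helper are placeholders rather than details from the announcement.

    # Minimal sketch: retrieve proprietary context from MongoDB Atlas to ground an LLM.
    # The cluster URI, database/collection names, index name, and embed() helper are
    # placeholders; adapt them to your own deployment.
    from pymongo import MongoClient

    client = MongoClient("mongodb+srv://<user>:<password>@<cluster>.mongodb.net")
    collection = client["appdb"]["documents"]

    def retrieve_context(question, embed, k=5):
        """Return the k documents most similar to the question, for use in a RAG prompt."""
        results = collection.aggregate([
            {
                "$vectorSearch": {
                    "index": "embedding_index",      # Atlas Vector Search index on "embedding"
                    "path": "embedding",
                    "queryVector": embed(question),  # embedding produced by your model of choice
                    "numCandidates": 100,
                    "limit": k,
                }
            },
            {"$project": {"_id": 0, "text": 1}},
        ])
        return [doc["text"] for doc in results]

    # The returned snippets are then placed into the LLM prompt alongside the user question.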

Additionally, a new synchronisation feature between MongoDB Atlas and Microsoft Fabric enables the extraction of near real-time business insights. This new capability promises quicker analysis and decision-making through real-time analytics, AI-based predictions, and business intelligence reports.

On the deployment front, MongoDB Enterprise Advanced (EA) can now be deployed across various environments, including on-premises, hybrid, and multi-cloud. This is made possible by its certification as an Azure Arc-enabled Kubernetes application, which provides customers with increased flexibility and control over data infrastructure.

In support of this partnership enhancement, Alan Chhabra, Executive Vice President of Partners at MongoDB, stated, “We frequently hear from MongoDB’s customers and partners that they’re looking for the best way to build AI applications, using the latest models and tools. And to address varying business needs, they also want to be able to use multiple tools for data analytics and business insights. Now, with the MongoDB Atlas integration with Azure AI Foundry, customers can power gen AI applications with their own data stored in MongoDB. And with Open Mirroring in Microsoft Fabric, customers can seamlessly sync data between MongoDB Atlas and OneLake for efficient data analysis. Combining the best from Microsoft with the best from MongoDB will help developers push applications even further.”

Trimble, a prominent provider of construction technology, is among the early testers of these integrations. Dan Farner, Vice President of Product Development at Trimble, commented, “As an early tester of the new integrations, Trimble views MongoDB Atlas as a premier choice for our data and vector storage. Building RAG architectures for our customers require powerful tools and these workflows need to enable the storage and querying of large collections of data and AI models in near real-time. We’re excited to continue to build on MongoDB and look forward to taking advantage of its integrations with Microsoft to accelerate our ML offerings across the construction space.”

Eliassen Group, a strategic consulting firm, also expressed positive expectations regarding the expanded collaboration. Kolby Kappes, Vice President – Emerging Technology at Eliassen Group, said, “We’ve witnessed the incredible impact MongoDB Atlas has had on our customers’ businesses, and we’ve been equally impressed by Microsoft Azure AI Foundry’s capabilities. Now that these powerful platforms are integrated, we’re excited to combine the best of both worlds to build AI solutions that our customers will love just as much as we do.”

Available in 48 Azure regions worldwide, MongoDB Atlas offers joint customers access to powerful document data model capabilities. This integration aims to accelerate and simplify the way developers build applications using structured and unstructured data.

Sandy Gupta, Vice President of Partner Development ISV at Microsoft, remarked, “By integrating MongoDB Atlas with Microsoft Azure’s powerful AI and data analytics tools, we empower our customers to build modern AI applications with unparalleled flexibility and efficiency. This collaboration ensures seamless data synchronization, real-time analytics, and robust application development across multi-cloud and hybrid environments.”

Article originally posted on mongodb google news. Visit mongodb google news



Presentation: From Local to Production: A Modern Developer’s Journey Towards Kubernetes

MMS Founder
MMS Urvashi Mohnani

Article originally posted on InfoQ. Visit InfoQ

Transcript

Mohnani: My name is Urvashi Mohnani. I’m a Principal Software Engineer on the OpenShift container tools team at Red Hat. I have been in the container space for about 7 years now. I’m here to talk to you about a developer’s journey towards Kubernetes. Let’s do a quick refresher on what containers are. Containers are software packages that bundle up code and all of its dependencies together so that the application can run in any computing environment. They’re lightweight and portable, making them easy to scale and share across the various environments. When run, containers are just normal Linux processes with an additional layer of isolation and security, as well as resource management from the kernel.

Security comes in the form of configuring which and how many permissions your container has access to. Resources such as CPU and RAM can be constrained using cgroups. The isolation environment can be set up by tweaking which namespaces the process is added to. The different categories of namespaces you have are user namespaces, network namespaces, PID namespaces, and so forth. It really just depends on how isolated you want your container environment to be. How do we create a container? The first thing we need is a containerfile, or a Dockerfile. You can think of this as the recipe of what exactly goes inside your container.

In this file, you will define the dependencies and any content that your application needs to run. We can then build this containerfile to create a container image. The container image is a snapshot of everything that was in the recipe. Each line in the recipe is added as a new layer on top of the previous layers. At the end of the day, we compress all these layers together to create a tarball. When we run this container image, that’s when we get a container. Since containers are just Linux processes, they have always existed. You just had to be a Linux guru to be able to set up all the security, isolation, and cgroups around it. Docker was the first container tool to make this more accessible to the end user by creating a command line interface that does all the nitty-gritty setup for you, and all you have to do is give it a simple command to create your container.
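
To make the recipe-to-container flow concrete, here is a minimal sketch driven from Python through the podman CLI; it assumes podman is installed locally, and the base image, file names, and tag are illustrative.

    # Minimal sketch of the recipe -> image -> container flow, assuming podman is installed.
    import pathlib
    import subprocess

    # 1. The "recipe": a Containerfile listing the base image, content, and start command.
    pathlib.Path("app.py").write_text('print("hello from a container")\n')
    pathlib.Path("Containerfile").write_text(
        "FROM docker.io/library/python:3.12-slim\n"
        "COPY app.py /app/app.py\n"
        'CMD ["python", "/app/app.py"]\n'
    )

    # 2. Build the recipe into a layered container image.
    subprocess.run(["podman", "build", "-t", "hello-demo", "-f", "Containerfile", "."], check=True)

    # 3. Run the image; the running instance is the container, a normal Linux process.
    subprocess.run(["podman", "run", "--rm", "hello-demo"], check=True)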

Since then, many more container tools have been created in the open-source world, and they target different areas of the container space. We have a few listed on the slide here. We have Buildah, which focuses on building your container images. Skopeo focuses on managing your container images. Podman is a tool for not only running your containers, but also developing and creating pods. There is CRI-O, which is a lightweight daemon that is optimized for running your workloads with Kubernetes. And Kubernetes itself is a container orchestration platform that allows you to manage your thousands of containers in production. Together, all these various container tools give you a holistic solution, depending on what area you really need to focus on in the container space. For this talk, I’m going to use Podman, which is an open-source project, to highlight how we can make a developer’s journey from local to prod seamless. A few things I would like to mention: Podman is open source and completely free to use. It is daemonless, focuses on security first, and is compatible with all OCI-compliant container images and registries.

Towards Kubernetes

You’ve been running your containers locally, so how do you get to production? There are a few key challenges in going there. Some of them are paranoid sysadmins, different technologies and environments, and a different skill set as well. We call this the wall of discrepancies. Security doesn’t match up. You have low or no security in your local dev environment while production has highly tightened security. Container processes have different permissions available to them. Locally, you have root privileges available, while in production rootless is required. In fact, even the way you define your container is different between the two environments. All of this just adds a lot of overhead for the developer and can definitely be avoided. Let’s take a look at how we can target some of these. When you run a container locally with a tool like Podman, you can use a bunch of commands and flags to set up your container. I have an example here where I’m running a simple Python frontend container and I want to expose the port that’s inside it.

To do that, I have used a publish flag so that when I go to localhost, port 8088, I’m able to access the server that’s running inside it. Another way that you can define and run containers locally is using a Docker Compose file. This is a form of YAML that the Docker Compose tool understands. Here’s an example of how you would define that. Let’s say you have your container running locally. You want to now test it out in a Kubernetes environment. Wouldn’t it be great if you could just copy either the CLI command that you have there, or the Docker Compose file, and just paste it in the cluster? Unfortunately, you cannot do that. Those of us here who have run in Kubernetes before know that Kubernetes has its own YAML format to define your container and your workloads. As you can see, there are three formats going on around here, so when you want to translate from your local dev to a Kubernetes environment, you have to translate between these formats. That can be tedious and can also be error prone, as some configurations could be lost in the translation, just because fields that are named one way in the Kubernetes YAML format are not exactly the same as the flags that are used in the CLI.

You really have to keep referring back to documentation to figure out how they map. This is where the podman kube command can help. In an effort to make the transition from Podman to Kubernetes and vice versa easy on the developer, Podman can automatically generate a Kube YAML for you when you pass it a container ID or a pod ID that’s running inside Podman. At this point, you can literally do a copy and paste of that Kube YAML file, put it in your cluster and get running. Of course, users can further tweak that generated Kube YAML for any specific Kubernetes use cases or anything that they want to update later on.

I mentioned vice versa, so you can go from Podman to Kubernetes, but you can also go from Kubernetes to Podman with one command. Let’s say you have an application running in your Kubernetes production cluster. Something goes wrong with it, and you really want to debug it. You have some issues getting the right permissions, or access to try and figure it out on the production cluster itself, and you wish you could just replicate that workload locally here. Good news for you is that you can do that with the podman kube play command. You just have to go into your cluster, grab the Kube YAML of that workload, pass it to podman kube play, and Podman will go through all the container definitions, pod definitions, any resources defined in that, and create it on your local system, and start it for you. Now you have the same workload running locally, and you have all the permissions and access you need to be able to debug and test it, just with two commands, podman kube generate and podman kube play.
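
For reference, that round trip can be scripted; the sketch below drives the two commands from Python, assuming podman is installed and a container named "frontend" is already running locally (the name is just a placeholder).

    # Minimal sketch of the podman kube generate / podman kube play round trip.
    # Assumes podman is installed and a container named "frontend" exists locally.
    import subprocess

    # Generate Kubernetes YAML from the running container.
    kube_yaml = subprocess.run(
        ["podman", "kube", "generate", "frontend"],
        check=True, capture_output=True, text=True,
    ).stdout
    with open("frontend-kube.yaml", "w") as f:
        f.write(kube_yaml)

    # The same file can be applied to a cluster (kubectl apply -f frontend-kube.yaml),
    # or replayed locally for debugging. In practice you would replay it on another
    # machine, or stop the original container first to avoid name clashes.
    subprocess.run(["podman", "kube", "play", "frontend-kube.yaml"], check=True)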

Outside of Kubernetes, Podman also works really well with systemd. You can use Podman and systemd to manage your workload using systemd unit files. This is especially useful in Edge environments where running a Kubernetes cluster is just not possible due to resource constraints. Edge devices are also a form of production environment, where you’re running your applications there. As we can see here, when you want to do that with systemd, systemd has its own different format. In addition to the three that we just spoke about, there’s a fourth one that you probably have to translate your workloads to if you want to move them to Edge environments. In the effort of standardizing all of this and making it easy for the developer, Quadlet was added to Podman. What Quadlet does is that it’s able to take a Kube YAML file, convert it to a systemd unit file under the hood, and start those containers with Podman and systemd for you, so the user doesn’t have to do anything. All you need is that one Kube YAML file that defines your container workload, and you can plug it into Podman, into a Kubernetes cluster, and into an Edge device using systemd.

Rootless First

That was on the container definition. Remember I mentioned that Podman focuses on security first. This can actually be seen in its architecture. Podman uses a fork-exec model. What this means is that when a container is created using Podman, it is a child process of Podman itself. This means that root privileges are not required to run. If someone is trying to do something weird on the machine, when you take a look at the audit logs, you can actually trace it back to exactly which user was trying to do what. When you compare that to the Docker daemon, root access is required, although now you can set up rootless contexts. If someone is trying to do something weird, when you take a look at the audit logs, it points to this random UID, which essentially is the Docker daemon, but it doesn’t tell you which user was trying to do what there. In the rootless first scenario, there are two things to keep in mind. When you run your container, you want to run it as rootless on the host, which is the default when you run with Podman.

You also want your container application that’s running inside the container to be run as a rootless user. This is something that is often overlooked, because just running rootless on the host is considered enough, and container engines, by default, give you root privileges inside the container when you just start it up. This is something that developers usually don’t keep in mind. When you are running in a production Kubernetes based cluster, that is focused on security, so something like OpenShift, running rootless inside the container is a hard requirement. Keeping this in mind and practicing it while you’re doing your development will save you a lot of headaches when you then eventually translate from your local development to a production cluster. In the rootless first scenario, you want to run rootless inside and outside of the container.

Security

Continuing with the security theme, when you use Kubernetes out of the box, it provides you with three pod security standards. The first one is privileged. Here, your container process has basically all the permissions and privileges possible. You definitely do not want to be using this when you’re running in production. The second one is baseline. Here, your process has some restrictions, but not so many restrictions that you’re banging your head on the wall trying to get your container working. It’s secure, but it’s also broad enough to give you the ability to run your containers without issues. The third one is restricted. This is the one that’s heavily restricted. You basically have zero or very few permissions. This is probably the one you want to use when you’re running in production, but it’s often the most difficult to get started with. We always advise that you start with baseline, the middle ground, get there first, and then continue tightening the security. Let’s take a deeper dive into security. There are two key aspects to it. The first one is SELinux. SELinux protects the host file system by using a labeling process to allow or deny processes access to resources on the system. In fact, past file system CVEs would have been mitigated if you had SELinux enabled on your host.

To take advantage of this, you need to have SELinux enabled both on the host and in the container engine. Podman and CRI-O are SELinux enabled by default, while other container engines are not. If you’re running a Kubernetes cluster using CRI-O, you will have SELinux enabled by default. If your host also has SELinux enabled, then your file system is protected by the SELinux security policies. Always setenforce 1. The second one is capabilities. You can think of capabilities as small chunks of permissions that you give your container process. The more capabilities your container has, the more privileges it has. On the right, this is the list of capabilities that Podman enables by default. It’s a pretty small list. It has been tightened down enough that you are secure, and also, you’re able to still run your containers without running into any security-based issues. When we compare this with the list allowed by the baseline pod security standard given by Kubernetes, they have the same list and actually have a few more capabilities that you can enable as well. When you run in production, you probably want to have even fewer capabilities enabled so that you can shrink your attack surface even further.
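
To tie the rootless and capabilities points together, here is a small illustrative invocation driven from Python; the image name is a placeholder, and the flags shown (--user, --cap-drop, --cap-add, --memory) are standard podman run options.

    # Illustrative podman run invocation: rootless inside the container, minimal
    # capabilities, and a cgroup memory limit. The image name is a placeholder.
    import subprocess

    subprocess.run([
        "podman", "run", "--rm", "-d",
        "--user", "1000:1000",              # run as a non-root user inside the container too
        "--cap-drop", "ALL",                # start with no capabilities...
        "--cap-add", "NET_BIND_SERVICE",    # ...and add back only what the app actually needs
        "--memory", "256m",                 # constrain RAM via cgroups
        "registry.example.com/my-rootless-app:latest",
    ], check=True)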

To reiterate on the two themes over here, one is that Podman makes it easy for you to transition between your local environment and your prod environment by giving you the ability to translate your CLI commands to Kube YAMLs, or by just being able to understand a Kube YAML and being able to plug that into Podman, Kubernetes, and systemd. The second one is that Podman’s focus on security first helps you replicate an environment that is secure, or at least quite close to what you would expect in a production environment.

Obviously, it’s not going to get you 100% there, but it can at least get you 50% there, so when you do eventually transition over, you run into fewer frictions and have already targeted some of the main security aspects that come when you move to production. With Podman, you can run your containers. You can also run pods, which gives you an idea of what it’s like to run in Kubernetes, because Kubernetes is all pod based. You can run your Docker Compose YAML file with one command. You can convert it to Kube YAML, and deploy those to Kubernetes clusters like kind, minikube, OpenShift, vanilla Kubernetes itself. All of these capabilities and tools are actually neatly put together in a desktop application called Podman Desktop that runs on any operating system. It works on Mac. It works on Linux. It works on Windows. In fact, I’m using a Mac, and I will show you that.

Demo

This is what the desktop app looks like. I’m running on a Mac. I’m on an M2 Mac right now. It gives you information on what Podman version we are running right now, and just some other resources. On the left, we have tabs to take a look at any containers that we have, any pods, images. I’ve already pulled down a bunch of images. You can see the volumes. You can see any extensions that are available. Podman has a lot of extensions available to help you interact with different tools. You can set up a kind cluster, or you can set up a minikube cluster. You can talk to the Docker daemon, if you would like to do that. You can set up Lima as well. There’s a bunch of extensions that you can enable to take advantage of these different tools. For the demo, I am going to start a simple Python application that’s running a web server. This is just the code for it. I have already built and pre-pulled my images down because that takes a while.

If you would like to build an image, you can do that by clicking this button over here, build, and you can browse to the file where your containerfile is stored. In Podman, you can select the architecture you want to build for, and it will build it up for you. Since I already have it built, I’m just going to go ahead and run this container. I have my Python application as a web server that also has a Redis database that I need for the application. You’ll see why once I start it. First, I’m just going to click on this to start it up, give it a name, let’s call it Redis. I’m going to configure its network so that my Python frontend can actually talk to it once I start that. My container is up and running. When it starts, there are all these different tabs that you can take a look at. The logs obviously show you the logs of the container. We can inspect the container, so this gives you all the data about the container that’s running, any information you may or may not need.

By default, we create the Kube YAML for you as well. If you want to just directly run this in a Kubernetes cluster, you can just copy paste this, and deploy it there. With the terminal, you can also get dropped into a shell in the container and play around with it there. Now when I go back to my containers view, I can see that I have the Redis container running. Now let’s go ahead and start the frontend. Let’s give it the name, python frontend. I need to expose a port in this one so I can access it. I’m going to expose it to port 8088. I’m going to go back here and add it to the same network that I had added the Redis database to. Let’s start that. That’s up and running. Similar thing here, you can see the logs. You can inspect the container. You can see the Kube YAML. It can also be dropped into a terminal over here. Same thing. When I go back to my containers, now I see I have two containers running. This is running locally on my Mac right now. Since I’ve exposed the port 8088, let’s go to a browser window and try and talk to that port. There you go. That’s the application that I was running. Every time someone visits this page, the counter will go up by 1, and that is what the Redis database is needed for to store this information. That was me running my container locally.

Let’s say that I want to put this in a pod to replicate how it would run when I run it in Kubernetes, but I still want to run it locally on my machine using Podman. Very simple to do. Go ahead and select this. You can put one, or probably as many containers as you would like in a pod. I’ve not tested the limit on that, but if you do find it out, you can do that. Then I’ll click on that create pod button that showed up there. Click on create pod here. What it will do is now it will create my pod with these two containers inside it. You can update the name of the pod to whatever you would like it to be. I have just left it as my pod. Here we can see I have three containers running inside it, one is the infra container. Then I have the Redis and the Python frontend containers.

Yes, when I click on that, I can actually see the containers running inside it. Same thing with the pod here, when you go you can see the logs in there. I can see the logs for both the containers. You can inspect the container, and you can also get the Kube YAML for the whole pod with both the containers inside. When I go back to containers here, we can see that the first two containers that I had started have been stopped in favor of this new pod with these containers inside it. It’s still exposed at port 8088, so let’s go ahead and refresh. As you can see, the counter started back at 1 because a new container was created, but every time I refresh, it’s going to go up. I successfully ran my container and podified it. That’s what we call it. This is all local on Podman.

Now I have this pod. Let’s say that I want to now actually deploy it in the Kubernetes cluster, but I’m not ready to deploy it in a prod remote Kubernetes cluster yet. I want to test it out still locally using something like kind or minikube. As I mentioned earlier, Podman has those extensions. If you go to resources, you can actually set those up with the click of a button. I have already set up minikube on my laptop right now. We can, in fact, see the minikube container running inside Podman over here. If I go to my terminal and I do minikube status, you can see that my minikube cluster is up and running. Podman also has this tree icon over here where you can see the status of the Podman machine and get to the dashboard. It has this Kubernetes context thing as well. In the kubeconfig file that’s on your laptop, you can sign into multiple different Kubernetes clusters, as long as you have the credentials for them. Podman Desktop can see the contexts of the different clusters available to you, and you can switch between them.

You can decide which one you want to deploy to, which one you want to access, which one you want to see which pods are running in. Right now, I want to do it on minikube, which is running locally on my computer. That’s what I have selected. Now all I do is I go to this icon over here, I click on deploy to Kubernetes. It will generate the Kube YAML file for me. You can tell it which namespace you want to deploy it into. I just wanted a default namespace, and I’ll click on deploy. When we do that, we can see that pod was created successfully and the containers are running successfully. When we go to my terminal and we do kubectl get pods, we can see my pod is up and running over there. We can also actually see this on the Podman Desktop interface, when we go to pods.

Podman Desktop is able to see which pods and deployments are running in the Kubernetes cluster you’re currently pointing at, and it will tell you that this is the pod. You can see that the environment is set to Kubernetes, so you know it’s the Kubernetes cluster and not your local Podman. Now, same thing here. Let’s get the service so we can see my-pod-8088 services there. I want to expose this so I can actually access the web server running inside it. I’m just going to do minikube service on that, and run that. There you go. It opened a browser for me with that new container and minikube cluster. Every time I refresh, the counter goes up by 1. I was able to, with a click of a button, deploy my container that I had running locally on Podman into a minikube cluster.

What’s the next step? Pretty obvious. You want to deploy it remotely into a cluster that’s probably a production cluster, or a cluster that you test on right before you send it out to production. The really easy way of doing that is basically the same steps again. I’m going to go over here and switch out my context to point to a remote OpenShift cluster that I have running on AWS right now. I’m going to click that. When we do that, we’ll see that you no longer see the pod that’s running in minikube, because now it’s pointing to my OpenShift cluster. I can just go ahead here and do the same thing, deploy to Kubernetes. It would have been deployed on the OpenShift cluster: you would have just switched the context, and it would have done the same thing it did with minikube and launched it over there. It would have been pretty cool, since we would have exposed the port that the application was running on, and it would have been running in an AWS environment.

This was just demoing, moving from local to prod. I did all of this using the Podman Desktop UI. If you’re someone like me who really prefers to use the terminal and type instead of clicking on a bunch of buttons, all of this can be done using the Podman command line interface as well. You can do podman images, it will show you a list of all your images. You can do podman ps, it will show you a list of all your containers running. You can do podman pod ps, and it will show you your pods running. I mentioned that you can also go from prod back to local or to Podman. You can also do that by going back to the Podman Desktop app and clicking on this play Kubernetes YAML button over here. You can browse and point it to the Kube YAML that you wanted to play. You can select whether you wanted to run with Podman or run in the Kubernetes cluster that you’re currently pointing at. That’s something you can do. I’m not going to do that from here. I want to show you how it works with the command line, so I’ll do it from there. This is basically the Kube YAML that I’m going to play.

Very simple. It’s there. It’s a very simple nginx server that I have defined over here. I’m going to go back into my terminal, and let’s do podman kube play. I’m going to set the publish-all flag, just because I want to expose the port that I have defined in there, and pass it the kube.yaml. There you go, the pod was created. When we do podman pod ps, we can see the nginx pod was created. When we do podman ps, we can see the nginx container was also created over here. We can see that it’s exposed at localhost port 80. We can go back to our browser and we can go to localhost 80, and nginx server is up and running. With the Kube YAML file, I was able to just do podman kube play, and get that running locally with Podman. That is basically the demo I had for you, that highlighted that path of moving from Podman to Kubernetes, Kubernetes back, and all the different stuff that you can do with the different ways you can test, play, and deploy eventually to your production cluster.

Podman Desktop

You can use Podman to run and build container images. You can run your containers and pods. It integrates really well with Kubernetes. As we saw, it has all those features to be able to easily deploy to Kubernetes and pull it back from there. It has a concept of pods to help you replicate what a Kubernetes environment would look like when you do run your workloads in Kubernetes after containerizing them. You can do image builds, set up the registries you would like to pull images from, load images for testing, and all of that. With the click of a few buttons, you can set up a kind cluster locally with Podman, a minikube cluster locally, and can connect to various Kubernetes contexts. One thing I’d like to highlight again is the extensions that Podman supports. We have support for kind, minikube, the Docker daemon, OpenShift Local, Lima, and many more. It’s just a way of giving all of these tools to the developers so that they can play around with them and have access to everything and anything they might need when developing their containerized workloads.

K8s and Beyond

I know this talk focuses on Kubernetes, but there’s a lot more the developer might need, and there are a bunch of cool features that have been added recently to Podman and Podman Desktop. One thing is AI Lab. AI is really big right now. We’re all super excited about it, and so is Podman Desktop. They added a new extension called AI Lab, where you can run your AI models locally, so that you can then create your container applications using that as an inference endpoint, basically. The next one is Bootc, where you can create and build bootable container images. The idea here is that, in the future, your operating systems will be installed and upgraded using container images. I think it’s pretty cool. It’s still pretty much under development, but you have the ability to start playing around with that right now.

The final one is farm build, which is actually a feature I worked on personally, where you can build multi-arch images from one system. Given that Silicon Macs are so popular nowadays, having the ability to have different architecture images is very important now. In fact, I actually used this command when I was creating the demo for this talk, because my Mac is an M1 architecture, so I was doing all of that with Podman Desktop on my Mac. If OpenShift AWS had worked, that was on an x86 architecture, so I would have needed that architecture image for that part of the demo. If you’re excited by all of this, one of my colleagues has put together a bunch of demos and info on all of this. You can find that at this link.

AI Lab

I can show you the AI Lab extension in Podman Desktop, just because I think it’s very cool. Back in Podman Desktop, I’ve already enabled it. I just click on AI Lab over here, and it gives me this view. I can take a look at the recipes catalog. These are some things that it gives you out of the box. You can set up a chatbot, a summarizer, or code generation. I’m going to set up a quick chatbot. I’ll click on Start AI app. What it does is it checks out the repository and pulls down the model. I chose to pull down the granite model, but there are a bunch of different models you can pull down from InstructLab. It sets up the llamacpp-server and Streamlit chat app. When I go to this running tab, I can see that app is up and running, and I can just go to the interface that they provide me by default. Let’s ask it a question.

Let’s ask it, what is InfoQ Dev Summit? It’s going to do its thinking and give us an answer. I’m just using the interface that they gave me, but while you’re developing your applications, you can also just connect to it for your use case. I haven’t really given it many resources to run right now. That’s why it’s pretty slow. The more powerful a machine you have, the better your performance will be. I think it gave us a pretty accurate answer on what InfoQ Dev Summit is. With the click of a few buttons, I have a personal chatbot running on my machine with Podman Desktop. Then there’s also the Bootc extension over here. This helps you create those bootable OS container images. You click on this, and it gives you the ability to switch between disk images and all of that. That’s something you can also play around with.
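
For reference, connecting your own application to the locally served model might look like the sketch below; it assumes the llamacpp server exposes its usual OpenAI-compatible chat endpoint, and the port and model name are placeholders (check the running service for the actual values).

    # Minimal sketch: call the locally served model from your own application.
    # Assumes the llamacpp server exposes its usual OpenAI-compatible API; the
    # port and model name below are placeholders.
    import requests

    resp = requests.post(
        "http://localhost:8000/v1/chat/completions",
        json={
            "model": "granite",
            "messages": [{"role": "user", "content": "What is InfoQ Dev Summit?"}],
        },
        timeout=120,
    )
    print(resp.json()["choices"][0]["message"]["content"])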

Get Started with Podman and Podman Desktop

Podman is open source and completely free to use. The same goes for Podman Desktop. There’s a pretty big community around it; discussions, PRs, issues, and contributions are all welcome. You can check out our podman.io documentation page to get started.

See more presentations with transcripts



How to Mask Data for Testing in MongoDB – Security Boulevard

MMS Founder
MMS RSS

Posted on mongodb google news. Visit mongodb google news

Masking data for safe, compliant use in testing environments is not as straightforward as it seems, especially when using schemaless, unstructured databases like MongoDB. Personally identifiable information (PII) can be scattered throughout your document-based data in ways that are hard to predict—so hard that it simply isn’t a challenge most teams offering data de-identification solutions are willing to take on. (Spoiler alert: Tonic.ai isn’t “most teams.”)

Let’s explore how to mask data for testing in MongoDB, plus what makes this such a challenging nut to crack. Then, we’ll wrap up with a quick demonstration of how to mask MongoDB data using Tonic.

MongoDB’s Masked Data Challenges

MongoDB is a NoSQL database system that stores data as documents. Its document model operates without a fixed structure or schema, and each document can contain numerous types of data across its nested levels.


MongoDB’s flexible storage system makes it efficient for scaling apps because it stores large amounts of data within clusters that can span many nodes. However, this document-based storage system presents significant challenges when de-identifying and masking data.

Challenge 1: Unstructured data

The first challenge is its unstructured nature. Since MongoDB is schemaless, each field in a collection can hold any one of various data types. Furthermore, this type can change from document to document. A field may exist as an integer in one document and a string in another. This lack of consistency presents an obstacle to masking data and generating production-like data for testing.


Challenge 2: MongoDB’s JSON storage format 

MongoDB’s JSON storage format poses another data masking challenge. The JSON format houses various forms of data, from names to license plate numbers and other types that are less easily quantifiable. Highly nested document fields create complex hierarchies, which complicates achieving the level of granularity needed to generate realistic, representative test data.

Challenge 3: Time required to build

Even for a relational database management system (RDBMS), it takes a significant amount of time and resources to create an infrastructure capable of generating test data that perfectly mimics production data. MongoDB’s various formats and versions significantly increase that time. Your de-identification infrastructure requires generators that can track and mask each version and document format. Hardly a walk in the park.

Finding an Effective Data Masking Solution

Generating MongoDB data is challenging because an effective solution needs to:

  • Detect and locate PII across each document in the entire collection.
  • Mask the data according to type, even though key types may vary across document levels with the same key.
  • Provide complete visibility into your document collection to observe and check each document during the generation process.

Assume a sample analytics collection with multiple documents contains documents A and B with a plate_number key. However, the plate_number key in document A is an integer, while the one in document B is a string. An efficient solution should know that it needs to mask the integer plate_number key with an integer and the string key with a string.
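
To make that requirement concrete, here is a small, hypothetical illustration of type-aware masking (this is not Tonic's implementation): the replacement value is generated to match whatever type the original field holds in each document.

    # Hypothetical illustration of type-aware masking: the fake plate_number keeps
    # the type the original field had in each document.
    import random

    def mask_plate_number(value):
        """Return a fake plate number of the same type as the original value."""
        if isinstance(value, int):
            return random.randint(100000, 999999)           # integer stays an integer
        if isinstance(value, str):
            letters = "".join(random.choices("ABCDEFGHJKLMNPRSTUVWXYZ", k=3))
            return f"{letters}-{random.randint(100, 999)}"  # string stays a string
        return value                                        # leave unexpected types untouched

    documents = [
        {"_id": "A", "plate_number": 482913},       # integer in document A
        {"_id": "B", "plate_number": "XYZ-123"},    # string in document B
    ]
    masked = [{**doc, "plate_number": mask_plate_number(doc["plate_number"])} for doc in documents]
    print(masked)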

How Tonic Masks MongoDB Data

Tonic enables aggregating document collections in MongoDB to de-identify sensitive information and generate realistic, useful document-based data while eliminating the risk of PII slipping through. The Tonic interface provides a comprehensive view of the entire data generation process.

Let’s take a look at how it works:

A schemaless data-capturing method

Tonic curbs the complexity of document storage databases by employing a schemaless data-capturing method. We create a hybrid document model representing the entirety of the documents in your collection, then transfer the model to lower environments — like your staging environment. After connecting, Tonic scans the source database and automatically creates this hybrid document while capturing all edge cases and leaving no room for PII leaks.
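
The idea of a hybrid document can be pictured with a short sketch: walk every document in the collection and record, for each field path, the set of types that actually occur, so no field (and no stray PII) is missed. This is a simplified, hypothetical illustration of the concept, not Tonic's scanning code.

    # Simplified, hypothetical illustration of building a "hybrid document": record
    # every field path seen across the collection and the set of types it takes.
    from collections import defaultdict

    def collect_paths(doc, prefix="", schema=None):
        """Walk a (possibly nested) document and record the types seen at each path."""
        if schema is None:
            schema = defaultdict(set)
        for key, value in doc.items():
            path = f"{prefix}.{key}" if prefix else key
            if isinstance(value, dict):
                collect_paths(value, path, schema)
            else:
                schema[path].add(type(value).__name__)
        return schema

    collection = [
        {"name": "Ada", "plate_number": 482913, "address": {"city": "Pune"}},
        {"name": "Grace", "plate_number": "XYZ-123"},   # same field, different type
    ]

    hybrid = defaultdict(set)
    for document in collection:
        collect_paths(document, schema=hybrid)

    print(dict(hybrid))
    # {'name': {'str'}, 'plate_number': {'int', 'str'}, 'address.city': {'str'}}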

Granular NoSQL data

By mixing and matching our 50+ generators, the platform masks NoSQL data with a high degree of granularity to mirror the complexity of your data. Regardless of how unstructured or varied your document database is, Tonic can accommodate, masking several data types in a document — even within the same fields.

Then, Tonic’s user-friendly interface enables you to preview the newly masked data, giving you a comprehensive view of each document so you can refine your data along the way.

Consistent data across databases

Tonic’s cross-database support helps achieve consistent test data throughout your data ecosystem. By connecting natively to other database types like Redshift, PostgreSQL, and Oracle and matching input-to-output generated data types across these databases, Tonic produces realistic test data that preserves relationships across databases.

Additionally, organizations can use Tonic with Mongo as a NoSQL interface, freeing teams to mask data stored in homegrown, document-based DB solutions.

Now that we understand the value Tonic brings to the question of how to mask data for testing in MongoDB, let’s explore how to put it into action.

How to mask PII data in MongoDB with Tonic

Integrating with MongoDB is simple — Tonic can connect natively to MongoDB via an Atlas connection string.

First, create a destination database in MongoDB to store the generated data. We’ll call our database tonic_mongo_integration.

Then, log into the Tonic platform and create a workspace. The workspace stores your connection information, data jobs, and other relevant information.

Within your workspace configuration, enter your MongoDB Connection String for your source and your destination. These connection strings enable Tonic to grab the data from your source Mongo instance and then store masked data in your destination Mongo instance. Next, enter the name of your MongoDB Database that will hold the fake data.

Then, hop into the Privacy Hub in the left side menu. The Privacy Hub performs a scan of all the documents in your collection. It flags data that it identifies as sensitive and makes recommendations for which generators to apply to protect that data.

You can apply generators directly within Privacy Hub or click into the Collection View in the left side menu to see your hybrid document in full.

In the Collection drop-down menu, select customers. This gives a preview of the collection’s fields. You can set the view mode as Single Document, which then shows each document in your collection.

Alternatively, you can view a Hybrid Document that shows all of the fields that appear across the documents in the collection.

Once you’ve applied the generators you need to safely and realistically mask your sensitive data, it’s time to generate. Click Generate Data in the upper right.

To view the status of your data generation, click Jobs in the left-hand side menu. We’re showing a Completed status, so let’s check our database and compare our MongoDB data.

Here’s the original data in our source DB. Note the various address fields.

And here’s the fake data in our destination DB. Notice the new, realistic address fields.

And there you have it! Real fake document-based data in MongoDB for all your testing needs. 

How to mask data for testing in MongoDB with Tonic

Tonic’s unique data masking solution for MongoDB enables developers to test their products efficiently with high-quality, realistic document-based data.

But don’t take our word for it. Check out what eBay had to say on the eBayTech blog about Tonic’s role in creating high-quality NoSQL data for their staging environments. If you’re looking for a similar solution for your NoSQL data, let’s chat.

*** This is a Security Bloggers Network syndicated blog from Expert Insights on Synthetic Data from the Tonic.ai Blog authored by Expert Insights on Synthetic Data from the Tonic.ai Blog. Read the original post at: https://www.tonic.ai/blog/how-to-mask-data-for-testing-in-mongodb

Article originally posted on mongodb google news. Visit mongodb google news



Beyond Open Banking – Exploring the Move to Open Finance – Finextra Research

MMS Founder
MMS RSS

Posted on mongodb google news. Visit mongodb google news

  • How far along is the financial services industry on the journey from open banking to open finance?
  • As the drive to digital increases, the barriers to entry lower. What are the modern data architecture changes needed to not just keep up with open, embedded ecosystems, but also increased competition?
  • What do companies operating under open finance rules need to consider when sharing financial data with third-party providers?
  • And what are the technical challenges financial institutions are facing in the open era? How can they overcome legacy systems to embrace the open finance revolution?
  • Regulations like FiDA are adding another layer of complexity. How can institutions unlock the potential of open data while adhering to the highest regulatory standards?

The financial industry has made significant progress on data access through open banking, driven by regulations like the second Payment Services Directive (PSD2), and that progress will expand beyond payments under the proposed PSR and PSD3. Open banking marked the beginning of this new era, and the shift toward open finance is the essential next phase, with incoming regulations like Financial Data Access (FiDA) covering new areas including mortgages, pensions, investments, and savings.

While these initiatives hold promise, a successful open finance framework will require many financial institutions – who still rely on outdated systems and processes that may not natively support the flexible data access requirements of FiDA – to review their data architectures. If institutions can’t adhere to strict geographic data regulations (with certain jurisdictions enforcing that data remains within specific regions), guarantee real-time data availability, or scale in real-time to handle increased API traffic, further data modernisation efforts will be required.

How can financial institutions navigate this complex web of open architecture, flexible infrastructure, data requirements and regulation? Modern ecosystems require modern solutions, so organisations need to re-think their approach to data, not just to be able to facilitate open finance, but also to stay relevant in an increasingly competitive landscape.

Sign up for this Finextra webinar, hosted in association with MongoDB, to join our panel of industry experts who will discuss how the move to open finance can be managed effectively with the right data architecture in place.

Article originally posted on mongodb google news. Visit mongodb google news



DBHawk Flies with Text-to-SQL, SOC 2 Compliance – Datanami

MMS Founder
MMS RSS

Posted on nosqlgooglealerts. Visit nosqlgooglealerts

Customers will be able to interact with their database using natural language thanks to the new text-to-SQL function in DBHawk, a database access tool developed by Datasparc. The company also announced its SOC 2 Type II compliance and an expansion of its partnership with IBM.

DBHawk is Datasparc’s handy database tool that lets different users accomplish a range of different database tasks. For instance, data analysts can use it to write and execute SQL queries against dozens of databases, including relational and NoSQL databases. Data engineers can use it to perform joins and schedule SQL queries. Administrators can also use it to create tables or views, among other capabilities.

Its new text-to-SQL feature lets users interact with SQL databases using natural language. It uses AI to convert the natural language query into SQL, which is then executed against the database. Datasparc says the feature will be useful for expanding database access to people who aren’t SQL experts.
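
DBHawk has not published its internals, but the general text-to-SQL pattern it describes is easy to sketch: pass the schema and the user's question to a language model and treat the reply as a candidate query. The TypeScript sketch below uses the OpenAI Node SDK and a hypothetical orders schema purely for illustration; a real product would add validation, read-only guards, and access controls before executing anything.

```typescript
// Generic text-to-SQL sketch (not DBHawk's implementation).
import OpenAI from "openai";

const openai = new OpenAI(); // expects OPENAI_API_KEY in the environment

async function textToSql(question: string): Promise<string> {
  const schema = "orders(id, customer_id, total, ordered_at)"; // hypothetical schema
  const completion = await openai.chat.completions.create({
    model: "gpt-4o-mini",
    messages: [
      {
        role: "system",
        content: `Translate the user's question into a single SQL SELECT statement over this schema: ${schema}. Return only the SQL.`,
      },
      { role: "user", content: question },
    ],
  });
  return completion.choices[0].message.content ?? "";
}

textToSql("What was last month's total revenue?").then(console.log).catch(console.error);
```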

Datasparc showcased the new text-to-SQL capability at the recent PASS Data Community Summit, which took place last month in Seattle, Washington. “We are thrilled to share our latest advancements in AI-powered data analytics at the PASS Data Summit,” said Datasparc CEO Manish Shah. “Our text-to-SQL AI feature is a game-changer, and we can’t wait to demonstrate its capabilities to the data community.”

The San Diego, California-based company also announced that it recently obtained SOC 2 Type II compliance, which indicates that it passed an audit against the American Institute of Certified Public Accountants (AICPA) criteria for security and privacy controls. DBHawk is available as a software as a service (SaaS) product, which makes the SOC 2 Type II certification especially important. It’s also available as an enterprise product that customers can install on-prem or in virtual private cloud (VPC) environments.

Lastly, Datasparc announced it has expanded its partnership with IBM. Becoming an IBM Partner Plus member shows that it is dedicated to supporting IBM customers, in particular customers running the z/OS mainframe and LUW (Linux, Unix, and Windows) versions of the Db2 database.

Related Items:

DBHawk Partners with Microsoft Azure, Lands Patent for Sensitive Data

DBHawk Enjoys Growth in the Cloud

Web-Based Query Tool Touches Multiple DBs




Microsoft Introduces Local Emulator for Azure Service Bus Wanted by Developers for Years

MMS Founder
MMS Steef-Jan Wiggers

Article originally posted on InfoQ. Visit InfoQ

In response to developer feedback, Microsoft launched a local Azure Service Bus emulator. According to the company, the emulator promises to simplify the creation and testing of Azure Service Bus applications by offering a localized environment free from network or cloud-related constraints.

Azure Service Bus is a managed message broker that enables reliable communication between applications. Its features include queues and topics for efficient message handling, load balancing, transactional reliability, and safe data routing to decouple services.

Despite its robust capabilities, developers often face challenges testing against cloud-based Service Bus instances due to latency, costs, and cloud dependencies. This local emulator addresses these hurdles head-on.

The company designed the emulator with developer convenience in mind, offering several benefits like:

  • Optimized Development Loop: Developers can test and iterate quickly without relying on cloud deployments, drastically reducing the development cycle time.
  • Cost Efficiency: Since the emulator runs locally, it eliminates cloud usage costs for testing and development scenarios.
  • Isolated Environment: Local testing ensures no interference from other cloud-based activities, allowing precise troubleshooting and debugging.
  • Pre-Migration Testing: Developers can trial Azure Service Bus using their existing AMQP-based applications before committing to a full cloud migration.

The emulator is platform-independent and accessible as a Docker image from the Microsoft Artifact Registry. Developers can deploy it quickly using docker compose or automated scripts available in Microsoft’s Installer repository.
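
Once the container is running, the standard client SDKs can be pointed at it. The sketch below uses the @azure/service-bus package for Node.js; the connection string format and the queue.1 queue name are assumptions that must match the emulator's config.json, so adjust them to your local setup.

```typescript
// Send and receive one message against the locally running emulator.
import { ServiceBusClient } from "@azure/service-bus";

const connectionString =
  "Endpoint=sb://localhost;SharedAccessKeyName=RootManageSharedAccessKey;" +
  "SharedAccessKey=SAS_KEY_VALUE;UseDevelopmentEmulator=true;"; // assumption: default emulator settings

const client = new ServiceBusClient(connectionString);

async function main() {
  const sender = client.createSender("queue.1");
  await sender.sendMessages({ body: "hello from the local emulator" });
  await sender.close();

  const receiver = client.createReceiver("queue.1");
  const [message] = await receiver.receiveMessages(1, { maxWaitTimeInMs: 5000 });
  if (message) {
    console.log("received:", message.body);
    await receiver.completeMessage(message); // settle so it is not redelivered
  }
  await receiver.close();
  await client.close();
}

main().catch(console.error);
```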

While the emulator replicates much of the Azure Service Bus’s functionality, some features are unavailable:

  • Azure-specific integrations like virtual networks, Microsoft Entra ID, and activity logs.
  • Advanced capabilities like autoscaling, geo-disaster recovery, and large message handling.
  • Persisted data: Container restarts reset data and entities.

Furthermore, the emulator is tailored for local development and lacks several high-level Azure Service Bus cloud service features. It does not support a UI portal, visual metrics, or advanced alerting capabilities.

The emulator enforces the same kinds of quotas as the cloud service, such as:

  • Maximum of 50 queues/topics per namespace.
  • Message size capped at 256 KB.
  • Namespace size is limited to 100 MB.

Configuration changes must be pre-defined in config.json and applied before restarting the container.

Developers have anticipated a local Service Bus emulator for years. Vincent Kok, a freelance .NET developer, wrote in a post on LinkedIn:

Initially, Microsoft rejected the idea of setting up a local development for Azure Service Bus. The official answer from Microsoft was to use cloud instances of Azure ServiceBus. However, this approach requires each developer to create their own Service Bus namespace to ensure isolated development and testing. Alternatively, developers can share a single Service Bus namespace, but this introduces the risk of messages published by one developer being consumed by another, which is not very practical.

And:

Today, six years after that GitHub issue was first opened, the wait is finally over! Microsoft has released a local emulator for Azure Service Bus, enabling developers to build and test applications locally without needing to spin up cloud instances of Service Bus.

Furthermore, on X, Dave Callan, a Microsoft MVP, tweeted:

It’s so amazing that this is finally here.

We can use the emulator to develop and test code against the service in isolation, free from cloud interference.

Lastly, the emulator is compatible with the latest Service Bus client SDKs.




Swiss National Bank Buys 2,300 Shares of MongoDB, Inc. (NASDAQ:MDB) – MarketBeat

MMS Founder
MMS RSS

Posted on mongodb google news. Visit mongodb google news

Swiss National Bank increased its position in shares of MongoDB, Inc. (NASDAQ:MDB) by 1.1% in the 3rd quarter, according to its most recent Form 13F filing with the Securities & Exchange Commission. The institutional investor owned 217,700 shares of the company’s stock after acquiring an additional 2,300 shares during the quarter. Swiss National Bank owned about 0.29% of MongoDB, worth $58,855,000 as of its most recent SEC filing.

Other large investors have also recently modified their holdings of the company. MFA Wealth Advisors LLC acquired a new stake in MongoDB in the 2nd quarter worth approximately $25,000. J.Safra Asset Management Corp lifted its stake in shares of MongoDB by 682.4% during the second quarter. J.Safra Asset Management Corp now owns 133 shares of the company’s stock worth $33,000 after buying an additional 116 shares during the period. Quarry LP grew its holdings in shares of MongoDB by 2,580.0% during the second quarter. Quarry LP now owns 134 shares of the company’s stock valued at $33,000 after buying an additional 129 shares during the last quarter. Hantz Financial Services Inc. acquired a new position in shares of MongoDB in the 2nd quarter valued at $35,000. Finally, GAMMA Investing LLC increased its position in shares of MongoDB by 178.8% in the 3rd quarter. GAMMA Investing LLC now owns 145 shares of the company’s stock valued at $39,000 after acquiring an additional 93 shares during the period. Institutional investors own 89.29% of the company’s stock.

Insider Transactions at MongoDB

In other news, CRO Cedric Pech sold 302 shares of the business’s stock in a transaction that occurred on Wednesday, October 2nd. The shares were sold at an average price of $256.25, for a total value of $77,387.50. Following the sale, the executive now directly owns 33,440 shares of the company’s stock, valued at approximately $8,569,000. The trade was a 0.90% decrease in their position. The transaction was disclosed in a legal filing with the Securities & Exchange Commission. Also, CAO Thomas Bull sold 1,000 shares of the company’s stock in a transaction that occurred on Monday, September 9th. The stock was sold at an average price of $282.89, for a total transaction of $282,890.00. Following the completion of the transaction, the chief accounting officer now owns 16,222 shares in the company, valued at approximately $4,589,041.58. This represents a 5.81% decrease in their ownership of the stock. Insiders sold 25,600 shares of company stock worth $7,034,249 in the last 90 days. Insiders own 3.60% of the company’s stock.

Analyst Upgrades and Downgrades

A number of research analysts have recently weighed in on MDB shares. UBS Group increased their price objective on MongoDB from $250.00 to $275.00 and gave the company a “neutral” rating in a report on Friday, August 30th. Mizuho lifted their price objective on shares of MongoDB from $250.00 to $275.00 and gave the company a “neutral” rating in a research note on Friday, August 30th. Needham & Company LLC increased their price objective on shares of MongoDB from $290.00 to $335.00 and gave the stock a “buy” rating in a research note on Friday, August 30th. Wells Fargo & Company boosted their target price on shares of MongoDB from $300.00 to $350.00 and gave the company an “overweight” rating in a research report on Friday, August 30th. Finally, Wedbush raised MongoDB to a “strong-buy” rating in a research report on Thursday, October 17th. One equities research analyst has rated the stock with a sell rating, five have assigned a hold rating, nineteen have assigned a buy rating and one has assigned a strong buy rating to the company. Based on data from MarketBeat, the company has a consensus rating of “Moderate Buy” and a consensus target price of $336.54.

MongoDB Stock Performance

NASDAQ MDB opened at $289.15 on Wednesday. The company has a debt-to-equity ratio of 0.84, a quick ratio of 5.03 and a current ratio of 5.03. The stock’s fifty day moving average is $278.06 and its two-hundred day moving average is $273.04. The firm has a market capitalization of $21.36 billion, a P/E ratio of -95.74 and a beta of 1.15. MongoDB, Inc. has a fifty-two week low of $212.74 and a fifty-two week high of $509.62.

MongoDB (NASDAQ:MDB) last announced its quarterly earnings results on Thursday, August 29th. The company reported $0.70 earnings per share for the quarter, beating the consensus estimate of $0.49 by $0.21. MongoDB had a negative net margin of 12.08% and a negative return on equity of 15.06%. The firm had revenue of $478.11 million for the quarter, compared to analysts’ expectations of $465.03 million. During the same quarter in the previous year, the company posted ($0.63) EPS. The company’s revenue was up 12.8% compared to the same quarter last year. On average, research analysts anticipate that MongoDB, Inc. will post -2.39 earnings per share for the current year.

MongoDB Company Profile

MongoDB, Inc., together with its subsidiaries, provides a general purpose database platform worldwide. The company provides MongoDB Atlas, a hosted multi-cloud database-as-a-service solution; MongoDB Enterprise Advanced, a commercial database server for enterprise customers to run in the cloud, on-premises, or in a hybrid environment; and Community Server, a free-to-download version of its database, which includes the functionality that developers need to get started with MongoDB.

Further Reading

Institutional Ownership by Quarter for MongoDB (NASDAQ:MDB)

Before you consider MongoDB, you’ll want to hear this.

MarketBeat keeps track of Wall Street’s top-rated and best performing research analysts and the stocks they recommend to their clients on a daily basis. MarketBeat has identified the five stocks that top analysts are quietly whispering to their clients to buy now before the broader market catches on… and MongoDB wasn’t on the list.

While MongoDB currently has a “Moderate Buy” rating among analysts, top-rated analysts believe these five stocks are better buys.


Article originally posted on mongodb google news. Visit mongodb google news



.NET MAUI 9 Launched with Better Performance, New Controls

MMS Founder
MMS Edin Kapic

Article originally posted on InfoQ. Visit InfoQ

On November 12th, Microsoft presented .NET MAUI 9 in its final form. This version brings two new controls (HybridWebView and TitleBar), a slew of improvements throughout the framework, free SyncFusion controls, and an Xcode sync tool for Apple-specific files. The performance and stability of the entire framework have been enhanced.

MAUI is an acronym that stands for Multiplatform Application UI. According to Microsoft, it’s an evolution of the Xamarin and Xamarin.Forms frameworks, unifying separate target libraries and projects into a single project for multiple devices. Currently, MAUI supports writing applications that run on Android 5+, iOS 12.2+, macOS 12+ (as Mac Catalyst), Samsung Tizen, Windows 10 version 1809+, or Windows 11. The new version raises the minimum supported Apple platforms, up from iOS 11 and macOS 10.15 in .NET MAUI 8.

The .NET MAUI 9 journey to the GA (general availability) version started with Preview 1 in February 2024. A new preview was launched roughly every month, plus two RC (release candidate) versions were made available in September and October. These frequent releases allowed bugs to be caught and performance fixes to be folded into the final version.

The first of the newly added controls, HybridWebView, allows developers to host HTML, JavaScript, and CSS content within a web view, with a communication bridge between the web content and the MAUI application code in .NET. On the JavaScript side there is a HybridWebViewMessageReceived event and a SendRawMessage method. On the .NET side of the application there is a RawMessageReceived event and a SendRawMessage method on the control.
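
The web-content side of that bridge can be sketched as follows, in TypeScript, based on the event and method names mentioned above; the exact payload shape of the event is an assumption, so check the official HybridWebView samples before relying on it.

```typescript
// Web-content side of a HybridWebView page (sketch; payload shape assumed).
declare global {
  interface Window {
    HybridWebView: { SendRawMessage(message: string): void };
  }
}

// Receive raw messages pushed from the .NET side of the MAUI app.
window.addEventListener("HybridWebViewMessageReceived", (event) => {
  const message = (event as CustomEvent<{ message: string }>).detail?.message;
  console.log("From .NET:", message);
});

// Send a raw message up to the .NET application code.
window.HybridWebView.SendRawMessage(JSON.stringify({ action: "ping" }));

export {};
```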

The second of the new controls, TitleBar, allows developers to create custom title bars in their application. For the moment, this control is only supported on the Windows platform, with Mac Catalyst support coming ‘in a future release’. The title bar control is then set to the parent Window object using the Window.TitleBar property.

While there are only two new first-party controls in .NET MAUI 9, the recent partnership with SyncFusion has added 14 new free controls from the vendor to MAUI as a package. The new MAUI version adds a sample application to the MAUI App template, which showcases how to use several of the contributed controls, together with recommended practices for common app patterns.

As for performance and stability improvements, one of the significant changes was the complete re-implementation of CollectionView and CarouselView controls on Apple devices. The new implementation requires a code change in the root MauiProgram class.

The new version also brings several deprecated features. The most important one is the Frame control, which is marked as obsolete and should be replaced with the Border control. In addition, the Application.MainPage property is replaced by setting the Window.Page property to the first page of the app.

It is worth noting that just two days after the launch, a Service Release (SR) patch was published as version 9.0.10 of the framework. The SR adds small fixes to the GA code, possibly in response to comments from users on social networks complaining that upgrading to .NET MAUI 9 breaks Visual Studio or the build process. On the other hand, developers like Claudio Bernasconi state that “MAUI is heading in the right direction”.

Readers can refer to the official MAUI repository on GitHub for the complete release notes.

