Mobile Monitoring Solutions


The Deno Team Releases JSR, a New JavaScript Package Registry

MMS Founder
MMS Bruno Couriol

Article originally posted on InfoQ. Visit InfoQ

The Deno team recently released a beta of JSR, a new JavaScript registry that strives to better fit the needs of modern development and unify a fragmented JavaScript ecosystem. In particular, JSR embraces ESM (JavaScript’s native module system), natively accepts TypeScript packages, and supports major JavaScript runtimes (e.g., Node.js, Deno, Bun, browsers, and miscellaneous serverless environments).

The npm package manager, originally released in 2010, was designed around Node.js, the CommonJS module system, and vanilla JavaScript. Fifteen years later, JavaScript has a native module system (ESM), and TypeScript has become both the main choice for typed JavaScript and a test bed for new language features. Importantly, JavaScript is no longer limited to the browser and Node.js: cloud providers often run their own optimized JavaScript runtimes, while Deno and Bun are growing as alternatives to Node.js, revisiting key assumptions and focusing on developer experience.

JSR, the newly released JavaScript registry, is free and open source; it supersedes CommonJS modules with ESM, natively accepts TypeScript packages, and seeks to improve on the developer experience, performance, reliability, and security of npm.

JSR’s documentation also describes cross-runtime packages as a design goal:

The goal of JSR is to work everywhere JavaScript works, and to provide a runtime-agnostic registry for JavaScript and TypeScript code. Today, JSR works with Deno and other npm environments that populate a node_modules. This means that Node.js, Bun, Cloudflare Workers, and other projects that manage dependencies with a package.json can interoperate with JSR as well.

JSR strives nonetheless to reuse the npm ecosystem by allowing JSR packages to depend on npm packages:

JSR is designed to interoperate with npm-based projects and packages. You can use JSR packages in any runtime environment that uses a node_modules folder. JSR modules can import dependencies from npm.

JSR also uses a package scoring system to nudge package publishers toward best practices in code distribution. For instance, a ranking score rewards packages that include comprehensive JSDoc documentation on each exported symbol (used to automatically generate package documentation). The ranking score includes other factors such as the presence of optimal type declarations for fast type-checking and the compatibility with multiple runtimes.

Developers are encouraged to review the release note for miscellaneous examples of publishing flows. For instance, a package creator publishing a TypeScript package with JSR and Deno needs to populate at least three files: a jsr.json metadata file, the TypeScript source files for the package, and a README.md file providing an overview of the package. The jsr.json file would go as follows:

{
  "name": "@kwhinnery/yassify",
  "version": "1.0.0",
  "exports": "./mod.ts"
}

The exports field specifies the package modules that are exposed to the package consumers.
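For illustration, the exported module might look like the following. The body of yassify below is invented for this sketch (the release notes do not show the package’s source), but it illustrates the practice JSR’s scoring system rewards: a JSDoc comment on every exported symbol, which also feeds the auto-generated package documentation.

```typescript
/**
 * Wraps a string in sparkles.
 *
 * A JSDoc comment like this one on each exported symbol improves the
 * package's JSR ranking score and is used to generate the package docs.
 *
 * @param str The string to embellish.
 * @returns The embellished string.
 */
export function yassify(str: string): string {
  return `✨ ${str} ✨`;
}
```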

The package would then be published in a Deno environment with deno publish and in a Node.js environment with npx jsr publish.

The Deno standard library was recently made available on JSR. Developers can review the package documentation guidelines provided online in order to optimize their package ranking score. The Deno team additionally published an in-depth blog post on how they built JSR.



University of Washington AI-Powered Headphones Let Users Listen to a Single Person in a Crowd

MMS Founder
MMS Sergio De Simone

Article originally posted on InfoQ. Visit InfoQ

“Target speech hearing” is a new deep-learning algorithm developed at the University of Washington to allow users to “enroll” a speaker and cancel all environmental noise surrounding their voice.

Currently, the system requires the person wearing the headphones to tap a button while gazing at someone talking, or just to look at them for three to five seconds. This directs a deep learning model to learn the speaker’s vocal patterns and latch onto them, so it can play the voice back to the listener even as the listener moves around and stops looking at that person.

A naive approach is to require a clean speech example to enroll the target speaker. This is however not well aligned with the hearable application domain since obtaining a clean example is challenging in real world scenarios, creating a unique user interface problem. We present the first enrollment interface where the wearer looks at the target speaker for a few seconds to capture a single, short, highly noisy, binaural example of the target speaker.

The key during the enrollment step is that the wearer is looking in the direction of the speaker, so their voice is aligned across the two binaural microphones while other interfering speakers are likely not aligned. This example is used to train a neural network with the characteristics of the target speaker and extract the corresponding embedding vector. This is then used with a different neural network to extract the target speech from a cacophony of speakers.

According to the researchers, this constitutes a significant step forward compared to existing noise-canceling headphones, which can effectively cancel out all sounds but not selectively pick speakers based on their speech traits.

To make this possible, the team had to solve several problems, including optimizing the state-of-the-art speech separation network TFGridNet to make it run in real-time on embedded CPUs, finding a training methodology to use synthetic data to build a system capable of generalizing to real-world unseen speakers, and others.

Shyam Gollakota, one of the researchers behind “semantic hearing”, highlights that their project differs from current approaches to AI in that it aims to modify people’s auditory perception using on-device AI without relying on Cloud-based services.

At the moment, the system can enroll only one speaker at a time. Another limitation is that enrollment will succeed only if no other loud voices are coming from the same direction, but the user can run another enrollment on the speaker to improve the clarity if they are not satisfied with the initial result.

The team has open-sourced their code and dataset to facilitate future research work to improve target speech hearing.



Presentation: Building a Rack-Scale Computer with P4 at the Core: Challenges, Solutions, and Practices in Engineering Systems on Programmable Network Processors

MMS Founder
MMS Ryan Goodfellow

Article originally posted on InfoQ. Visit InfoQ

Transcript

Goodfellow: First off, I want to pose a few questions. What are programmable network processors? Are they potentially useful to you? If so, what’s it like to actually program them and engineer systems around them? These are the questions that I’d like you to be able to have at least partial answers to, after watching my talk. I work at a company called Oxide Computer Company. The picture that you’re looking at on my title slide here is a picture of our computer, it’s a rack-scale computer that we’ve designed from the ground up, holistically thinking about not building a bunch of computers and putting them into a rack, but designing an entire rack together. At the core of that the networks that bind everything together inside of that rack, are built on programmable processors. I’m going to be talking about some of the challenges in working with programmable processors, some of the solutions that we’ve come up with, and just trying to give you a sense for, what’s the level of effort involved? Is this a path that’s going to be interesting for me? Can I leverage this in my own infrastructure or my own code?

Fixed Function Network Hardware

First, I’m going to start out with what programmable processors are not. I’m going to use the word network processing unit or the acronym NPU quite a bit. That’s what I’m going to use to describe these, both fixed function processors and programmable processors. A fixed function NPU comes with a whole lot of functionality. These come from vendors like Broadcom, from Mellanox, from Cisco, and they’re really great. They come with tons of features. You can just buy one, build your network on top of it. They come with very much a batteries-included approach. When you buy a switch from Cisco, or Arista, or Mellanox, it’s going to come with an operating system that has drivers for all the hardware that you need. It’s going to have implementations of all the network protocols that need to exist on top of that piece of networking hardware for you to be able to build around it. It’s a pretty wonderful thing. If you look at the programming manuals for these NPUs, they’re massive. They’re up to 1000 pages long, some of them from Broadcom, I believe, these days. The reason for that is the level of investment that it requires to actually make one of these networking chips: we’re talking about in the neighborhood of $100 million to go from no networking chip to a full-blown networking chip that can run at terabit speeds with all the protocols that we expect to be running today. When they design these chips, they try to pack as much functionality as they possibly can so they can get the largest cross section of audience for people that need to actually have this level of performance in their network with all the protocols that they need to have. That clearly comes with a tradeoff. None of these features come for free. You have to spin silicon on absolutely everything that you’re going to implement on the chip. 
If tunneling or routing is really important to you, and something like access control lists, or NAT is not super important to you, you might not be able to use that full chip, because some of that silicon is allocated to the other features that are on that chip. It’s really a tradeoff here.

Programmable Network Hardware

What does this look like for programmable NPUs? When we look at the programmable NPU, it’s a blank slate. That’s in the name, it’s programmable. We write the code that’s actually going to run on this network processing unit. That has a lot of up stack ramifications, because we’re defining the foundation of the network, how every single packet that is going through this network processing unit is actually going to be treated. That means everything up stack from the drivers that are in the operating system to all of the protocol daemons and the management daemons that are up in user space in the operating system, all of that we have to participate in. Some of it, we have to write the complete thing. We might have to write drivers from scratch. We might get a software development kit from the vendor that has produced this network processing unit where we can take some of that. Maybe they have a Linux reference implementation and we can build on that for the operating system we’re working with, or directly use it in Linux with the modifications that we need. There’s a ton of work that needs to be done to actually be able to leverage these effectively to integrate them with the operating system’s networking stack and make sure all the protocol daemons that we need to function are actually functioning.

Why Take on All the Extra Work?

This begs the question, why take on all the extra work? When we implement whatever network functions are important to us, those functions don’t exist in a vacuum. If we want to improve the performance of something, or if we want to implement something new, we still have to have all of those other network functions around the ones that we’re concerned with, working well. We’re not going to come at this and do better than Cisco or Arista, who have been doing this for 30 years, just because we feel passionate about this and want to do this. The question is, what is the why here? Why would I want to do this? Why would all this extra work actually be worth it?

Flexibility (Line-Rate Control Loops)

The first thing that a lot of people think of when you think of anything that’s programmable, whether it’s, if we think back to the transition from fixed function GPUs, when it’s programmable GPUs, and we have things like CUDA and things like that, and OpenCL. Now with networking, we have a programming language called P4 that we’re going to be talking about quite a bit, the first thing that comes to a lot of people’s minds is flexibility. I can now do the things that I want to do. If you’re really trying to innovate in the networking space, and your innovation requires that you touch every packet going across a terabit network, you’re not doing that in the operating system. You’re not using something like DPDK, or Linux XDP to schlep all these packets up to the CPU, do whatever processing you’re going to do, and send them back down to your network card. If you’re running at these speeds, if you want to do innovation at that level, then you need specialized hardware to be able to do that. What you’re looking at here is a very brief picture, or a high-level overview of something that we’ve done in Oxide. One of the things that we really wanted to nail was the routing protocol that interconnects everything in our racks. Something about the physical network inside of our rack, and between racks is it’s multi-path. For every single packet that comes through our network processors, we have at least two exit routes that it can take to get to its final destination. That has great properties for things like resiliency and robustness and fault tolerance. It also means that we have to make that choice for every single packet that goes across our network. How do we make that choice? Traditional wisdom here is something called ECMP, or equal-cost multi-path, where you basically have a hashing function, where your hashing function looks at the layer 4 source and the layer 4 destination of the packet. 
It performs the hash on those, and the output of the hash is an index into the potential ports that you could send this out. There are nice statistical properties of this hashing function that basically say it’s going to have a uniform distribution. In the limit, you’re going to have a nice load balanced network. For those of us who have run large scale data center style networks, we know that this doesn’t always work out in practice. The intersection of the traffic that is going through these hashing functions with how they actually behave is not in fact uniform. The fault tolerance properties of them leave quite a bit to be desired, because they’re essentially open loop control. What we wanted to do for our rack is we wanted to have closed loop control. We wanted to take a look at every single packet, put a timestamp on it inside of one of our IPv6 headers; we’re all IPv6 in our underlay network. IPv6 has extension headers, and so we stuck some telemetry essentially in every single packet going through the network. When that packet would get to its destination, the destination host would look at the telemetry and say, do I need to send an acknowledgment to update basically what the end-to-end delay looks like from the point at which this telemetry was added, to the point at which it was received? It sends that acknowledgment back if necessary. Then our switches are programmed to recognize those packets, and then say, ok, so to this destination going out this particular path that I chose, this time, this is the latency. We’re going to probabilistically always choose the path of least latency. We’re going to be able to detect faults and react to them inside of the round-trip time of our network, which is actually really cool. It’s really advancing the state of the art in multi-path routing. There’s no way we could have done that without programmable processors for the network. This is one of the advantages that we have in terms of flexibility. 
They’re completely programmable, so the sky is the limit for what you can do.
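The two selection strategies contrasted above can be sketched roughly as follows. This TypeScript is purely illustrative: Oxide’s actual implementation is P4 running on the switch ASIC, real hash functions cover more header fields, and the real system chooses low-latency paths probabilistically rather than deterministically.

```typescript
// Open-loop ECMP: hash the flow's L4 source/destination ports into a
// path index. A toy hash stands in for the CRC-style hashes real
// ASICs use; the same flow always maps to the same path.
function ecmpSelect(srcPort: number, dstPort: number, numPaths: number): number {
  const h = (srcPort * 31 + dstPort) >>> 0;
  return h % numPaths;
}

// Closed-loop selection, as described in the talk: prefer the path
// with the lowest end-to-end latency reported by in-packet telemetry.
function lowLatencySelect(pathLatenciesNs: number[]): number {
  let best = 0;
  for (let i = 1; i < pathLatenciesNs.length; i++) {
    if (pathLatenciesNs[i] < pathLatenciesNs[best]) best = i;
  }
  return best;
}
```

The hash-based variant preserves per-flow packet ordering but cannot react to congestion or failures; the latency-driven variant closes the loop, at the cost of carrying telemetry in every packet.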

Fit for Purpose (Spending Silicon on what Counts)

The topics that I’m going to go over here, I’m going to go over three major ones. I view them in increasing levels of importance. The flexibility is important, but we also have a very rich ecosystem of fixed function processors. If you can’t get the flexibility that you need, it’s not the end of the world, so I viewed that as like the least important thing on this list. Moving up the important stack, we have fit for purpose resource usage, or what we call spending silicon on what counts. That’s what I was talking about at the very beginning of the talk, when we have these very nice processors with all these functions, but are these functions taking away silicon from the functions that I really care about? Earlier in my career, a very interesting example of this, maybe depressing example of this is when we were deploying EVPN networks for VXLAN. The early days of VXLAN were tough all around. We had some switch ASICs. One of the difficult things about VXLAN is you’re basically sending a layer 2 network over a layer 3 network, and layer 2 networks have broadcast domains in them. When you’re sending out packets that can have broadcast properties in them, you need to replicate those packets as they go across certain points in the network. The switches handled this in the early days. Now they use multicast. In the early days, it was mostly something called head-end-replication. There is silicon in the switch to do head-end-replication, but that silicon is needed for other things in the switch, for things like NAT and things like that. While we had switches that implemented that head-end-replication feature, we couldn’t use the entire space that we knew that was there on the chip, so we actually ran into soft limits. When we got to a certain level of load on our networks, our networks just stopped functioning, which was a really bad place to be in. It wasn’t clear why they were stopping functioning at the time. 
Being able to understand what the capacity limits of your silicon are, and exactly where your program reaches into those limits, is a really powerful thing, as is being able to just do what you need to do. The cycle times for these chips are pretty long. If you look at like Mellanox Spectrum, Broadcom Tomahawk, we’re on a three-to-five-year cycle for these things, and so they don’t come out very quickly.

Comprehensibility (Giving Designers and Operators Executional Semantics)

Then, finally, the most important thing is comprehensibility. If you’ve operated a large complex network, you’ve probably been at the place that this diagram is depicting. You have network or you have packets coming into your network, or maybe a particular device on your network. You know where they’re supposed to be going. There’s a place that they’re supposed to be, but they’re just not going there. They’re getting dropped, and you have no idea why. You look over the configurations of your routers and switches, you look at your routing tables, you look at your forwarding tables, everything lines up, everything looks good. You’re running tests. You’re calling the vendor. They’re asking you to run firmware tests. Everything looks good. Then we get to the solution, that is the most unfortunate solution in the world, we restart the thing. We take an even bigger hit in our network availability by restarting this whole thing, if it’s just some specific feature that’s not working. Then it starts working again, which is the worst thing in the world, because then we continue to lose sleep for the next three months thinking about, when is this thing going to happen again? There are other variations of this story. There’s the one where your network vendor sends you a firmware update that says they fixed it, but you have no insight into why it was fixed. You’re just trusting, but not able to verify that this fix is actually in place. It’s a very vexing position to be in.

What does this look like with programmable network processors? The bad news is the initial part of this doesn’t get any better. When your network goes down, it’s almost certainly going to be 2 a.m. on a Saturday. Support tickets are going to be on fire. People are going to be very angry with you. You have 16 chats that are all on fire trying to figure out what’s going on. None of that gets better with programmable networking. 2 a.m. on Saturday just somehow happens to line up with a critical timeframe for one of your business divisions that’s losing tons of work. None of that gets better. What does get better is what happens next. What happens next is when we have this packet that’s coming through our network, we have executional semantics for how our network actually works, because we wrote the code that underpins what’s running on the silicon itself. The code that I have on the right-hand side of this slide is just something that I’ve cherry picked from our actual code where we’re doing something called reverse path filtering. The basics of reverse path filtering are anytime we get a packet in a router, we’re going to do a routing table lookup on the source address of that packet, and see if we actually have a route back to that source. If we don’t have such a route, we’re just going to drop the packet on the floor, because if we were to get a response downstream from us that we want to send the reply to that source address, we wouldn’t be able to fulfill that request. This is a security feature that is commonly implemented in a lot of networking equipment. Our customers have asked for it, and so we’ve implemented it. The important thing about looking at this code is that we have several instances of this egress drop. We’re setting egress drop equal to true, meaning that this is our intention to drop this packet on the floor at this point in time. This is P4 code. 
If we look at our overall P4 that is running on the switch and look at all the places we’re intentionally dropping packets, then we can actually start to walk backwards from where are all the places that this can happen when I’m looking at a packet going function by function through my P4 code, and I know what the composition of this packet is, like what is actually going to happen. We can start to just look at this from a logical perspective instead of calling and pleading with our vendor perspective. This is an extremely powerful thing.
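In rough terms, the reverse-path-filter decision described above reduces to the following. This is a hedged TypeScript sketch, not the actual code: the real artifact is P4, and real hardware uses longest-prefix-match routing tables rather than the invented exact-match set used here.

```typescript
// Invented stand-in for the switch's routing table: a set of prefixes
// we know how to route to. Real hardware does longest-prefix matching.
type RoutingTable = Set<string>;

// Returns true when the packet should be dropped (the P4 code's
// "egress drop = true"): we hold no route back to the packet's source,
// so we could never deliver a reply to it anyway.
function reversePathFilterDrop(table: RoutingTable, srcPrefix: string): boolean {
  return !table.has(srcPrefix);
}
```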

Comprehensibility (Debuggability, Tracing, and Architectural State)

The comprehensibility stuff doesn’t end there. In the Twitter performance talk, the speaker was talking about, it’s not really scalable to think about performance in terms of just looking at your code and thinking really hard about what is dogging my performance. The same thing is true for robustness and reliability in networking. While we can go through and read our P4 code and things like that, this is not really scalable. We might be working with other people’s code that we’re not intimately familiar with, or other people might be working with our code. We need to have tracing and debuggability tools with our networks. With the fixed function networks, we’re fairly limited in what we can do. A lot of the NPUs that are coming from Arista, and Cisco, and folks, they allow you to basically take a port on your switch, and say, I want to mirror this port to the CPU and get all the packets from the CPU. When we think about how this system is architected from a hardware perspective, these ports are perhaps 100 gigabits. Maybe I need to look at 5 ports at the same time. Of course, when I need to look at things, they’re all under 80%, 90% load, so I have several 100 gigabits of traffic that I need to run through a trace tool like tcpdump. When I mirror that port to the CPU on that device, I’m coming through maybe like a PCI Express 4-lane link, maybe a 1-lane link, so we’re not shoving 100 gigabits through that. There’s going to be some buffer on the ASIC side to buffer packets as they come in, but it’s not going to keep up with multiple 100 gigabits of data coming through that. What we’re going to get is packet loss. One of the really nice things that we have about programmable NPUs is that we can actually put debugging code on these NPUs. We’ve done this at Oxide on our networking switches. 
When something is going sideways, we want to load this P4 program that can basically filter only the things that we’re interested in from these very high rate data streams coming through the ports, and then send that information up through our PCI Express port up to the host, where we can get a tcpdump like output. This is a big win for observability, when things are really going sideways.

Then, the desire to do tracing doesn’t end at tracing packets. This is like the go-to thing that we go to. As network engineers, we want to packet trace from a whole bunch of different places in the network to see what’s going on. Because we’re actually writing the code on these NPUs now and we have that level of understanding of what’s going on, it would be wonderful to have program traces. A technology that we lean on pretty heavily at Oxide for program traces, is something called DTrace, or dynamic tracing, which basically allows you to take a production program that understands how the instructions are laid out inside the binary for that program, and replace some of those instructions with the instruction itself, but also some instrumentation to see how your program is actually running. You can do function boundary tracing. If you’ve never used DTrace before, I’d highly recommend taking a look at it. It’s quite a magical piece of software for observability. What we’re looking at on the right-hand side of the screen here is a dynamic trace that I took from one of our P4 programs running that actually goes through and gives me statistics about every single function in my P4 program, every 5 seconds to tell me how often that this particular function was hit in terms of the execution of the program. When you have this, it shines a really bright spotlight on things that you know aren’t supposed to be happening. If I know I’m supposed to be doing layer 3 routing, and I know I’m supposed to have MAC address translation all set up and working for the next hop on that layer 3 route, and I see a lot of MAC rewrite misses. I know something’s wrong with my MAC tables, and I need to go look at that. This is a really good high-level tool for going in and being able to see what’s wrong.

Then, wrapping up the comprehensibility discussion is a continuation of this tracing discussion. The important part is not to be able to read the terminal text, but the text on the right-hand side of the screen. Something that we have with dynamic tracing that’s also really powerful is the ability to fire probes based on predicates. This terrible thing has happened in my network based on this predicate, now give me a dump with all of the architectural state of this processor. This is where we’re going to start to dig a little bit deeper into the hardware-software interface and the deep integration that hardware can have with software. Architectural state are things that come from the instruction set architecture for the processor that you’re executing on. What are all the registers looking like? What are the calling conventions for the stack? What does that stack look like? What does my stack trace look like? What does memory look like? We don’t have DRAM on network processors, but we have SRAM and we have more esoteric type of memory called TCAM. What does all that look like at the time at which my program has gone sideways, because I have the intent of my program and I have the execution reality of my program? Architectural state of the processor is what allows me to identify the delta between that intent and what’s actually happening. A lot of times in conventional programming, and we’ll see this in core dumps, when you have a crash in your operating system, you get a core dump that has all kinds of useful information in it. Architectural state is what allows that to happen. Back to the text in this slide, when this probe triggered, what I got was the packet composition that was running through my network pipeline. When the probe triggered, I got a stack trace of how exactly this packet flowed completely through my program. 
Then, at the end, the cherry on top is I got basically a heap on a table rewrite miss that told me exactly this is where your packet was dropped. This is the culmination of the where this gets better story for observability, is when things are going sideways, we have the ability to look in and see exactly what is happening with the mismatch between the intent of how our systems are running and how they’re actually running, and take real corrective actions that aren’t reboot the router, or ask for new firmware and pray that it’s going to actually solve the problem. This in my view is the most powerful thing about using programmable network processors is you can understand more about how your system works.

How and Where to Jump into the Deep End?

Some combination of flexibility, fit for purpose resource usage, or comprehensibility seems compelling to you, and you want to take the plunge into the deep end and start to work on programmable network processors. How and where do we start? I’m going to provide a little bit of insight from our experience at Oxide. One of the surprising things is that writing the actual data plane code is a very small piece of the pie. I sat down and tried to consider like, what were the relative levels of effort over the last three years we’ve been building this system, that we put into all of these things? The data plane code itself, the P4 code is like 5%, maybe. I think that’s probably being generous. It’s a couple 1000 lines of code, and our network stack is probably over 100,000 lines of code at this point. Around this, we have things like the ASIC driver in the operating system, which takes a ton of time and effort to be able to build. The operating system network stack integration is a huge piece. There are entire companies that are built around just this piece. If you remember, like Cumulus Linux, before they were taken over by NVIDIA. Their entire company was based on this, how do we integrate these ASICs from Broadcom and Mellanox, and places like that, into the Linux networking stack in a way that makes sense if you’re used to the Linux networking abstractions? This is a huge effort, and having to decide where you’re going to integrate and where you’re not going to integrate with the operating system. Do you need to integrate the operating system bridges? Do you need to take the VLAN devices, the VXLAN devices? Tons of decisions. Decision paralysis kicks in pretty quickly here, when you’re trying to define scope. Then things like the management daemons, the protocol daemons. Do you want to be able to keep the operating system’s protocol daemons that you’re using? Do you have to modify them? Do you want to rewrite some of them? Then, finally, integration as a big chunk. 
Twenty percent, I may be underselling the integration here, but making sure that this code that you write in P4 that is underpinning your network, is actually resulting in a functional network, end-to-end. It’s not good enough that it just works on a particular node. You have to build a whole network around this with all the routing protocols, all the forwarding protocols, everything that needs to work. Is it handling ARP correctly? Is it handling LLDP correctly? All these things need to work. You’re not in control of the endpoints, oftentimes, that are connected to your network, so you need to make sure that it’s robust.

Data Plane Code

With that, let’s dive a little bit into the data plane code. Not necessarily because it’s the smallest piece, but because it’s the piece that drives the rest of the work. It’s because this data plane is a blank slate that everything else has to be modified and becomes such a large amount of work. Over the next few slides in this presentation, we’re going to be writing a P4 program together. It’s a very simple programming language. We’re going to write a functional program in less than 100 lines of code that is actually maybe useful. I just made up this program for this presentation. It’s not a real thing that we actually use, but it’s cool. Just a quick explanation of what this program is going to do. We’re going to make a little border security device. What this border security device is doing is we have four ports. The first port, port 0, is connected to web traffic that’s coming into our infrastructure. We all know what a wonderful place the World Wide Web is, and the very pleasant messages that we get from hosts on the internet. Then we have a web server farm that is meant to serve these requests. Because the World Wide Web is not actually a nice place, we have a traffic policing cluster that looks at suspicious traffic to decide, should this traffic be allowed through to our web servers? Then we have threat modeling for traffic that we’ve decided is definitely bad, but we want to learn more so we can do defense in depth for when we see this type of threatening traffic moving forward. We’re going to write a P4 program that can do this.

The first thing that we have in P4 programs, much like in many other types of programming languages, is data structure definition. P4 is centered around headers. It processes packet headers. It doesn’t do a whole lot of packet payload processing. We only need two headers for this program, at least for the first version of the program that we’re going to write. We need an Ethernet header and an IPv4 header; we’re not going to handle IPv6 here. I’m not going to go over networking 101 here, but just trust me that these are correct representations of Ethernet and IPv4 headers. The next thing that we need to do, so we have our header definitions, but now we need to actually parse packets coming in off the wire in binary format into this header structure. What we have here is a P4 parser function, the function signature here. We have a packet_in, which is like our raw packet data. We have our out specifier. You have these inout specifiers that are reminiscent of Pascal. Then we have a headers object, which is a more structured representation of headers. That’s what we’re going to be filling in as an out parameter to this function. Then we have a little bit of metadata that we’ll talk about later. Parsers are state machines. They always start with a special state called start. In this start state, we are just extracting the raw data from the packet and placing it into headers.ethernet, which holds this Ethernet structure here. We’re taking the first 14 bytes from this raw packet data and just putting it into that more structured header. We don’t have to do any of the details of this; this is all taken care of by P4. Then we’re just going to have this conditional that says if the EtherType is the IPv4 EtherType. That little 16w on the leading edge, if you’ve ever programmed in Verilog, is just saying that this is 16 bits wide. We’re saying if this 16-bit value is equal to 0x800, which is the EtherType for IPv4, then we’re going to transition to the next state.
If not, then we’re just going to reject the packet. There’ll be no further processing on this packet, and we drop it on the floor. Next, similarly, we have an IPv4 state. We’re extracting the IPv4 headers. Then we transition to a special accept state, which means we’ve accepted this packet in terms of parsing, and now we want to go process it.
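To make the parser state machine concrete outside of P4, here is a minimal Rust sketch of the same logic: extract the 14-byte Ethernet header, check the EtherType, and either accept (pulling out the IPv4 source address we’ll need later) or reject. This is illustrative code, not the actual x4c output; the names and the `ParseResult` shape are assumptions made for this sketch.

```rust
// Sketch of the parser described above: take raw frame bytes, read the
// 14-byte Ethernet header, and accept the packet only when the EtherType
// is 0x0800 (IPv4). Rejecting corresponds to the P4 `reject` state.

const ETHERTYPE_IPV4: u16 = 0x0800;

#[derive(Debug, PartialEq)]
enum ParseResult {
    Accept { src_ip: [u8; 4] },
    Reject,
}

fn parse(frame: &[u8]) -> ParseResult {
    // Ethernet header is 14 bytes; a minimal IPv4 header is 20 more.
    if frame.len() < 14 + 20 {
        return ParseResult::Reject;
    }
    // The EtherType lives in bytes 12..14, big-endian on the wire.
    let ethertype = u16::from_be_bytes([frame[12], frame[13]]);
    if ethertype != ETHERTYPE_IPV4 {
        return ParseResult::Reject;
    }
    // The IPv4 source address sits at offset 12..16 of the IPv4 header.
    let src_ip = [frame[26], frame[27], frame[28], frame[29]];
    ParseResult::Accept { src_ip }
}

fn main() {
    // A fake frame: two MAC addresses, EtherType 0x0800, then a
    // 20-byte IPv4 header whose source address is 10.0.0.4.
    let mut frame = vec![0u8; 34];
    frame[12] = 0x08;
    frame[13] = 0x00;
    frame[26..30].copy_from_slice(&[10, 0, 0, 4]);
    println!("{:?}", parse(&frame));
}
```

The two-state flow (start, then IPv4, then accept) collapses into straight-line code here because a general-purpose language doesn’t need P4’s explicit state machine, but the decisions are the same.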

The last stage for this is processing the packet headers. Basically, what we’re doing here is we have a new type of function called a control function. We take as input the output of our last function, which is this headers_t that has our Ethernet header and our IPv4 header in it. We have a little bit of metadata that we’re going to talk about later. Then we have three actions. In our first action, forward, we are setting the egress port, so determining what port this packet is going to go out, to port number 1, which corresponds in our diagram to our web server traffic. Similarly, for our suspect and our threat actions, we’re setting the ports to 2 and 3 to go to the traffic policing and threat modeling, respectively. These actions are bound to packets through things called match-action tables in P4. The match part is based on a key structure that we’re defining in our triage table here at line 58. We’re saying that the key that we want to match on is the IPv4 source address from the headers that we parsed. The possible actions that we’re going to take are suspect and threat. Then the default action, if we have no matches in this table, is just going to be to forward this packet on. Finally, we have an apply block, where we’re basically saying, if the packet is coming from port 0, which means it’s coming from the internet, then we’re going to apply this triage. Otherwise, it’s coming from one of our three internal ports, so we just trust it, and we’re going to send it back out to the internet. That’s it. In about 70 lines of code, we’ve written a P4 program that actually implements the picture in the diagram.
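The match-action idea maps cleanly onto an ordinary lookup table. Here is a small Rust sketch of the triage logic just described: match on the IPv4 source address, pick an action, fall back to forward on a miss, and trust everything arriving on an internal port. The type names and port constants are made up for this sketch, not the real generated code.

```rust
use std::collections::HashMap;
use std::net::Ipv4Addr;

// Port numbering from the diagram: 0 = internet, 1 = web servers,
// 2 = traffic policing, 3 = threat modeling.
const PORT_WEB_SERVERS: u8 = 1;
const PORT_POLICING: u8 = 2;
const PORT_THREAT: u8 = 3;

#[derive(Clone, Copy, Debug, PartialEq)]
enum Action { Forward, Suspect, Threat }

struct Triage {
    // The match-action table: keyed on IPv4 source address.
    table: HashMap<Ipv4Addr, Action>,
}

impl Triage {
    fn new() -> Self { Self { table: HashMap::new() } }

    // Equivalent of populating the table at runtime.
    fn add_entry(&mut self, src: Ipv4Addr, action: Action) {
        self.table.insert(src, action);
    }

    // The `apply` block: traffic from port 0 (the internet) is triaged;
    // traffic from any internal port is trusted and sent back out port 0.
    fn egress_port(&self, ingress_port: u8, src: Ipv4Addr) -> u8 {
        if ingress_port != 0 {
            return 0;
        }
        // Default action on a table miss is `forward`.
        match self.table.get(&src).copied().unwrap_or(Action::Forward) {
            Action::Forward => PORT_WEB_SERVERS,
            Action::Suspect => PORT_POLICING,
            Action::Threat => PORT_THREAT,
        }
    }
}

fn main() {
    let mut triage = Triage::new();
    triage.add_entry(Ipv4Addr::new(10, 0, 0, 4), Action::Suspect);
    println!("{}", triage.egress_port(0, Ipv4Addr::new(10, 0, 0, 4))); // policing
    println!("{}", triage.egress_port(0, Ipv4Addr::new(10, 0, 0, 9))); // default forward
}
```

The important structural point carries over: the program defines the possible actions and the key, while the table contents arrive later from a control plane, not from the P4 code itself.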

Code Testing

Easy: define, parse, process, 1, 2, 3. Not all that bad. How does this actually work? This is just code on some slides. How do we verify that this thing actually works? What even is the testing model for this? How do I sit down and write tests for this? In general-purpose programming languages, we write the test code in the same programming language we write our actual code in. It makes it very convenient to just sit down and iterate and write a whole bunch of tests and verify that the code that we’ve written is actually doing the thing that we want. We don’t have that luxury in P4. P4 is very far from a general-purpose programming language. There’s no imperative programming. There’s no sitting down. P4 doesn’t actively do things, it just reacts to packets. We don’t have this luxury on the right here. This is just some Rust code where we are writing some unit tests and then compiling and running our unit tests. We have this nice cycle. This brings me to the first challenge: how do we build effective edit-compile-test workflows around data plane programming, without having to set up a bunch of switches that each cost like 20 grand, 30 grand a pop, and then, every time we want to run a new topology, having to rewire those switches? We don’t want to do that, but we want an effective workflow. The solution that we came up with for this is that we wrote a compiler for P4. We’re a Rust shop, we do most things at Oxide in Rust. We decided that we’re going to write a compiler in Rust, for P4, that compiles P4 to Rust. That way, we can write our test harnesses for our P4 code in Rust and have the nice toolkit that exists in Rust for doing unit tests and integration tests and things like that on our P4 code.
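Once the P4 is compiled to Rust, the test harness is just ordinary Rust: synthetic packets in, assertions on egress ports out. The sketch below shows the shape of that workflow; the `Pipeline` trait and `FakePipeline` are hypothetical stand-ins invented here to make the pattern self-contained, not the real x4c-generated API.

```rust
// A sketch of what testing compiled P4 looks like once it is ordinary
// Rust: feed packets as byte buffers through the pipeline and assert on
// which port each one comes out.

trait Pipeline {
    // Returns the egress port chosen for a packet, or None if dropped.
    fn process(&mut self, ingress_port: u8, packet: &[u8]) -> Option<u8>;
}

// A trivial stand-in so the harness below compiles and runs: everything
// arriving on port 0 goes to port 1, everything else is dropped.
struct FakePipeline;

impl Pipeline for FakePipeline {
    fn process(&mut self, ingress_port: u8, _packet: &[u8]) -> Option<u8> {
        if ingress_port == 0 { Some(1) } else { None }
    }
}

fn run_checks(p: &mut impl Pipeline) {
    // Synthetic packets in, egress-port assertions out.
    assert_eq!(p.process(0, &[0u8; 64]), Some(1));
    assert_eq!(p.process(5, &[0u8; 64]), None);
}

fn main() {
    run_checks(&mut FakePipeline);
    println!("ok");
}
```

The payoff of compiling P4 to Rust is exactly that `run_checks` can live next to the generated code and run under the normal `cargo test` or `cargo nextest` cycle, with no switch hardware involved.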

x4c Compiler Demo

I’m going to get a terminal up here. What we’re looking at in the top pane here is just the code that we wrote in the slides. Then our compiler comes in two formats. The first format is just a regular command line compiler. Our compiler is called x4c, so if we just run x4c against this border.p4, it doesn’t give anything out. The mantra is, no news is good news. This means that we have successfully compiled. What if we change something to, like, IPv7 or something like that? In border.p4, let me go down to the parser. Then we’re going to just transition to IPv7, which is not a thing, and run our compiler over it. Then we’re going to get this nice little error condition saying IPv7 is not defined, the basic stuff that we would expect from a command line compiler. This is not actually the most interesting thing about what we’re doing here. I said we wanted to do testing, so let’s actually do some testing. One of the really cool things about Rust is the macro system that we have in Rust, which allows us to basically call other code at compile time, and that code can influence the code that’s actually being compiled. Our P4 compiler doesn’t just emit Rust code, it actually emits Rust ASTs that can be consumed by the Rust compiler, so we can actively inject code into the Rust code that we are compiling. Here, I just have a library file where I’m using a macro saying, please ingest this P4 code, use our compilation library to create an AST, and just splat all of that generated code right here in this file, and give it a pipeline name of main. Then in my test code, which is in the same library, I can now create an instance of this pipeline main, and then just basically run some tests over it. I can start to fill up my tables to define what sources of packet information I view as suspicious, then run some packets through it, and just see if the packets that I get out are the ones that I expect.
We have a little piece of infrastructure in here that we call SoftNPU, or the software network processing unit, that we use to execute the P4 code that we generated, because the P4 code needs things like ports and all this infrastructure that an NPU actually provides. We have a software version of that, that we are running here. What I’m going to do is just run a unit test with cargo nextest run. This is the test that exists in this program. Here, it’s got this little test adornment on the top. Just to show that there are no tricks up my sleeve here, I’m not doing anything nefarious, let’s change part of this P4 code, let’s make a mistake. Let’s say that our suspect traffic is going to go out to port 3. Then I can just do my nextest run here. Now this code is essentially just going to sit here and hang, because I have a synchronous wait hanging off of that port expecting the packet to come out. This test is just going to hang here. Obviously, if it was a real test, we would want this to do something besides hang, and give us erroneous output after a timeout. We can change this P4 code back, run our test again, and everything should compile and run ok. This gives us a really powerful mechanism for iterating quickly over P4 code without having to set up a whole bunch of hardware and things like that, and for understanding the basic input/output relationships of the code that we’ve written. It’s only 70 lines of code here, so it’s nothing big, but when you get to thousands of lines of packet processing, it becomes fairly intense.

Test End-to-End Network Functionality Before Hardware is Available

If I were sitting in the audience of my own talk, I’d be looking at this going, so what? You sent some synthetic packets through a unit test generator, this is not a functional network. This is not real. I would agree with myself, this is very much not real. There are a bunch more things that functional networks actually have to have to work than just sending a packet in one port and out the other. The next challenge is, how do we actually test that this P4 code can go into a real network device and implement a real network, and do this with real packets from real machines with real network stacks? The solution that we came up with there is creating virtual hardware inside of a hypervisor. I said we were a Rust shop at Oxide. We also have a Rust-based hypervisor VMM, called Propolis, that works on top of the bhyve kernel hypervisor, that we can develop emulated devices in. What we did was we developed an emulated network processing unit that exists inside of this hypervisor. The chart that you’re looking at here on the right-hand side of the slide is basically how this comes together. Inside the hypervisor, we have another variant of SoftNPU that’s able to run precompiled P4 code. We take that Rust code that we compiled the P4 code into, and we create a shared library out of it. Then we’re able to load that shared library onto this SoftNPU device that exists inside of a hypervisor, by exposing a UART driver in the kernel that goes down to that hypervisor device. Then we can load the code onto that processor, and then actually run real packets through the I/O ports that are hanging off of that hypervisor.

Demo

In the next demo that I’m going to show, we’re going to have four virtual machines representing the web traffic, the web servers, the traffic policing, and the threat modeling. We’re actually going to send real packets through our P4 code. What we’re going to do is, on the left-hand side of the screen, I have my web traffic, where I’m just going to start pinging from a source address of 10.0.0.4, to 10.0.0.1. This ping is not going to work initially, because our switching NPU is empty, there’s no program running on it, and so we have to load some P4. In the upper right-hand terminal, I basically have this libsoftborder.so that I have compiled from P4 into a Rust-based shared library object. What I’m going to do is load the program, libsoftborder.so, onto my ASIC through this UART device, through a little program that I wrote just to communicate through the operating system’s UART interface. It’s loading this program that we compiled. Now on the left, we see the pings starting to flow from this machine. Now I’m going to start some tracing sessions on my server. I see on my server that I am getting these ICMP packets, which is what we would expect from ping. Then in my police and my threat modeling, I’m not seeing any packets yet. In this border admin program that I created, I created the ability to start populating this table that we’ve had in P4 for the match action, to start to direct packets in different directions. If I do a borderadmin add-suspect, and we’re going to do 10.0.0.4. The source address was .4. We’re going to load this into the table. Now all of a sudden, we see traffic stopping at our web servers, we’re just seeing ARP traffic there now. We’re seeing traffic down at our policing servers start to kick in. If I remove this triage, then I’m going to see now the packets starting to come back to where they were supposed to go originally.
Then at some later time, if we decide that this is actually threatening traffic, then we can go here and add a threat model. Down here, we’re starting to see the traffic come in to the threat cluster for further analysis. This shows that this was actually somewhat of a useful program that we wrote, and that it can actually direct packets. An interesting thing about this is that it’s a little bit of a lie. The extent to which this is a lie is that we didn’t implement ARP in the initial program. ARP is the Address Resolution Protocol. It’s when one computer says, I have a packet to send to 10.0.0.1, I need to send an ARP request out to know what the layer 2 address is for that machine. We didn’t implement that, and because we didn’t implement that, nothing is going to work on this network. The moral of that story is that when you’re doing P4 programmable stuff, you’re on your own. You have to implement and build everything. You have to realize that in order to build functional networks, there’s a lot more work than just doing what you want to do. Our 70-line program essentially turned into a 109-line program when I added ARP into it, to allow it to work in an actual, real network stack situation.
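To give a feel for the kind of work ARP support adds, here is a Rust sketch of the core request-to-reply transformation: given a "who-has 10.0.0.1?" request, build the reply carrying our MAC address. The offsets follow the standard 28-byte ARP payload for Ethernet/IPv4; the MAC and IP values are made up for illustration, and this is not the actual code added to the talk’s program.

```rust
// Sketch of the ARP handling that had to be added: turn an ARP request
// ("who-has <target IP>?") into the reply that advertises our MAC.

const OP_REQUEST: u16 = 1;
const OP_REPLY: u16 = 2;

fn arp_reply(request: &[u8; 28], our_mac: [u8; 6]) -> Option<[u8; 28]> {
    // The operation code is at bytes 6..8, big-endian.
    if u16::from_be_bytes([request[6], request[7]]) != OP_REQUEST {
        return None;
    }
    let mut reply = *request;
    reply[6..8].copy_from_slice(&OP_REPLY.to_be_bytes());
    // In the reply, we (the target of the request) become the sender
    // (bytes 8..14 MAC, 14..18 IP), and the requester becomes the target.
    reply[8..14].copy_from_slice(&our_mac);
    reply[14..18].copy_from_slice(&request[24..28]); // requested IP -> sender IP
    reply[18..24].copy_from_slice(&request[8..14]);  // requester MAC -> target MAC
    reply[24..28].copy_from_slice(&request[14..18]); // requester IP -> target IP
    Some(reply)
}

fn main() {
    // A request from 10.0.0.4 asking about 10.0.0.1.
    let mut req = [0u8; 28];
    req[6..8].copy_from_slice(&OP_REQUEST.to_be_bytes());
    req[14..18].copy_from_slice(&[10, 0, 0, 4]); // sender IP
    req[24..28].copy_from_slice(&[10, 0, 0, 1]); // target IP
    let reply = arp_reply(&req, [0xa8, 0x40, 0x25, 0x00, 0x00, 0x01]).unwrap();
    println!("reply sender IP: {:?}", &reply[14..18]);
}
```

Even this small piece implies more P4: a second EtherType branch in the parser, a new header definition, and actions that rewrite and reflect the packet, which is how 70 lines becomes 109.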

Closing the Knowledge Gap at Hardware/Software Interface

The final challenge is the fact that all of the instruction set architectures for NPUs today are closed source. We can’t do a lot of the things that I was talking about with the SoftNPUs on real hardware-based NPUs, because we don’t have the architectural state that we need. We at Oxide are working very hard to change this. We’re starting to get traction with vendors and with the P4 community to make this happen.

Questions and Answers

Participant 1: You’re saying that P4 is not general purpose, and you can’t just execute anything in there. What’s the main limitation compared to general-purpose operational logic? The second question is more about the borders between who’s writing P4. Is it you? Is it the people you ship racks to? Who’s writing the code that ships?

Goodfellow: There are a lot of differences. There are no loops, for example. There’s no code that just runs on its own, everything is in response to a packet coming through the system. Programming for P4 is a lot more like programming for an FPGA than it is for a CPU, within reason, like there are obviously exceptions to this rule. There’s no such thing as a slow P4 program. Your program either compiles, and it’s going to run at the line rate of the device that it’s running on, or it just doesn’t compile at all. Because of that, it has to be very constrained. The compiler will say, your program does not fit on the chip that you’re trying to fit it on. There are no loops. There are constraints on how much code you can actually fit on these devices, because they’re extremely heterogeneous. It’s not like a pile of cores and a pile of RAM. It’s like five different types of ALUs that all serve different functions. The compiler statically schedules the instructions onto those ALUs. It’s not like superscalar, where you have a scheduler on the chip itself. It’s a much different model. You can’t just write code that autonomously executes and things like that.

Today, when we ship products, we are shipping P4 code, and that P4 code is available to our users. There are some difficulties with that, not technically, but in terms of NDAs and things like that. The compilers for the chip that we’re using are closed source, and our customers don’t have access to the NDAs that would provide them access to those compilers. We are trying to break down those barriers. We want to live in a world where our customers can recompile the code for whatever they want, because part of our mantra is that it’s their computer. They bought it. They have the right to do whatever they want. We’re trying to realize that world, but we’re not quite there yet.

Participant 2: How do you get those general-purpose things to interface with P4, so populating that thing with [inaudible 00:45:39], being able to direct the traffic?

Goodfellow: We defined a very simple low-level interface. If we look at what a table key and what a table value are, we just basically serialize those in binary format, and then send them across a UART. We have some other development environments that have Unix domain sockets that we send these over. We just basically take that IPv4 or IPv6 address and chunk it up into bits. Then if it has other keys in there, like the port, we’ll take the big-endian representation of the integer, and we’ll just basically line all these things up in a binary chunk and send it over to the ASIC, and then the soft ASIC consumes that and populates the tables. There are much fancier runtimes. If you go to p4.org and look at the specifications, there’s a well-defined P4 Runtime that has gRPC interfaces and Apache Thrift interfaces and things like that. We do implement some of those things. For this, since it’s a development substrate, and we may shift between our high-level runtimes, we wanted to keep things very low level and then map those higher-level runtimes onto just the low-level piles-of-bits interface.
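The "piles of bits" interface described here is easy to picture as a flat serialization. The Rust sketch below lines up an IPv4 address followed by a big-endian port into one byte buffer, the way a table key might be shipped over a UART or Unix domain socket. The exact field layout is an assumption for illustration, not Oxide’s actual wire format.

```rust
use std::net::Ipv4Addr;

// Sketch of the low-level table interface described above: serialize a
// table key (an IPv4 address plus a port number, the integer in
// big-endian) into a flat byte buffer for the soft ASIC to consume.
fn serialize_key(addr: Ipv4Addr, port: u16) -> Vec<u8> {
    let mut buf = Vec::with_capacity(6);
    buf.extend_from_slice(&addr.octets());      // 4 address bytes
    buf.extend_from_slice(&port.to_be_bytes()); // 2 port bytes, big-endian
    buf
}

fn main() {
    let key = serialize_key(Ipv4Addr::new(10, 0, 0, 4), 443);
    // 443 = 0x01BB, so the port serializes as [1, 187].
    println!("{:?}", key);
}
```

Keeping the interface this low level is what lets richer runtimes like P4 Runtime be layered on top later without changing the device-facing protocol.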

Participant 2: Is the gRPC runtime one of these interfaces?

Goodfellow: If you’re a full solutions vendor selling a switch that’s P4 capable, the P4 Runtime defines a set of gRPC interfaces that can program that switch. Then high-level control planes have something stable to be able to hit, without having to reinvent what the control plane looks like for every single P4 switch.

Participant 2: It’s a general purpose processor in the switch [inaudible 00:47:23].

Goodfellow: The gRPC is not running on the NPU. It’s running on a mezzanine card that probably has an Intel Atom or a Xeon D or something like that on it.

Participant 3: All the tooling and observability stuff you all built is really cool. If another company was trying to also build P4 things, is there other general tooling they would use, or would they have to use the [inaudible 00:47:52], or do you provide that as well?

Goodfellow: To my knowledge, this type of observability tooling is unique in the space. Everything we built is open source and available, so those companies could pick up what we’ve created. We are somewhat tied in some of the tools we create to the operating system that we use, which is called illumos. We don’t actually build our infrastructure on Linux. We’re very tied to DTrace for a lot of the observability tooling. If folks wanted to pick that up, it’s open source and available and out there, or they could use it as inspiration to build what they want to build.




Podcast: Addressing the Gender Imbalance in Technical Leadership

MMS Founder
MMS Neria Yashar

Article originally posted on InfoQ. Visit InfoQ







Transcript

Shane Hastie: Good day, folks. This is Shane Hastie for the InfoQ Engineering Culture Podcast. Today I’m sitting down with Neria Yashar from Wix.

Neria, welcome. Thanks for taking the time to talk to us today.

Neria Yashar: Thank you for having me, Shane.

Shane Hastie: My normal starting point with these conversations is, who is Neria?

Introductions [01:08]

Neria Yashar: Okay, so let me tell you a bit about myself. I’m a software engineer, a backend engineer. I started my career actually as an electrical engineer more than a decade ago, deep in the hardware field. And later on I’ve gradually shifted my focus into the software world. I’ve worked for five years at a company which was acquired by Nvidia, then another five years at Apple and then Facebook, and now I’m in Wix for more than a year. So as you can see, I’m really into big tech corporates, and when I’m not coding, I’m a mother of two young boys and really passionate about supporting women in various activities, especially women in tech, which is something that I’m sure we’ll talk about later on.

Shane Hastie: Indeed. In fact, let’s dig into that. You’re very, very active in the Women in Tech community and the Woman in R&D community. Tell us a little bit about those activities.

The imbalance in women in engineering management roles [02:05]

Neria Yashar: I didn’t mention before, but I work at Wix, which is a company that provides a platform for building websites. Wix is a company that really supports women and is really pro-diversity. So yes, I take part in many activities.

So Wix has two main activities that are for women. The first one is Women in Tech, which is a forum that I’m not a main part of, as far as I’m aware. It’s a forum whose goal is to help increase women’s presence in tech and inspire women, in the hope of making an impact on future generations and promoting gender equality. So basically what they want to do is to see more and more women in R&D management. As you are probably aware, women are a minority in R&D. The numbers are about 20%, and as you go to higher positions, you see fewer and fewer women.

So just imagine that you have a group of 10 people and I’m one of them. If you’re in the group, you’ll have seven people to choose from for who you want to be friends with, and I will have only one other woman with me. Of course men and women can be friends, but if we’re talking about a real deep connection, it mostly happens between men and between women, from what I know. So this is the first activity.

The second one is Women in R&D community, which is a community that I founded about a year ago. So I can tell you more about it.

Shane Hastie: Please do.

Neria Yashar: Okay. So about a year ago I came to Wix, and I’ll share a personal story with you.

So when I was studying electrical engineering at the Technion in Israel, we were only 18% women. So it was really rough, and obviously all the guys had this brotherhood, and women couldn’t really be a part of that brotherhood. So even though many guys were good friends of mine, the connection was not as strong as the one that they had with each other. So sometimes you can feel a bit lonely. Later on, in my previous jobs, I really loved getting involved in women’s activities. And when I came to Wix, there wasn’t any activity like this. So I decided I’d do something about it, because I love it so much and I think it’s so important. So I reached out to some brilliant women like Nir Orman, who was also on your podcast a few months ago, and Aviva Peisach, and together we started building our own community. It’s something that I’m really happy that I did.

Shane Hastie: So what makes that community and then what do you do in that community? What are the activities?

What make the Women in R&D community special [05:08]

Neria Yashar: First of all, we take this community really seriously. It’s not like a hobby or an after-hours activity. It’s a really, really major project for us, and we treat it the same way that we treat other projects we have at our job. So for instance, at the beginning we did some serious digging, sending out a survey to the women engineers to find out what they want, like the research you would do when you get a project at your job. And then we sat down and we brainstormed about potential activities, like I would do with a project that I would get in my everyday job. We got dozens of ideas, so it came down to choosing the best ideas to do, and we have two main goals: professional growth and networking. Those goals, we got them from asking the women in the survey, “Why should you come? What are you looking for?” So it’s professional growth and networking. And we set goals and activities we wanted to do last year and this year.

So the activities, I just want to clarify, the activities are not girly activities. They’re activities that are for both genders, and we aim to increase the confidence and the knowledge of the women who come to the activities. And I personally am really into knowledge sharing, so I always try to initiate meetups about knowledge sharing. We had plenty of activities in the last year.

Shane Hastie: Serious activities designed to share knowledge, increased knowledge, increase confidence. Tell us some of the specific events that you’ve had in that community.

Neria Yashar: So as I mentioned before, we had many ideas, so we had to choose some of them. I’ll tell you a few.

Examples of community events [07:05]

So we had a leadership panel of women leaders from Wix, and the audience gave us the questions and then we asked the leaders. That was really interesting. And we had round tables to mingle and to think about future ideas for the community. In that meeting, we also had Hila Fish, who is a bit famous in Israel. She is really into public speaking, and she did a lecture for us about public speaking. She’s also working at Wix, so that was really easy.

We did an IAmRemarkable workshop. My partner and I were the facilitators, so it was nice. We had a festive lunch together in one of Wix’s restaurants, which is really one of the things that we do for networking. We had a personal branding talk with Morad Stern, who is the head of engineering branding.

Last example: just a week ago we had the final event of a course we brought into Wix. The course is called Women on Stage, and it’s run by Moran Weber. We took a few very, very promising engineers from Wix, and they were in the course. It’s a course about public speaking, and we did an event of TED-style talks where each graduate came and talked for 10 minutes or so about a subject that is close to her heart, that she’s really passionate about. So it was really fascinating, and it was fully booked, and I really enjoyed it. These are just a few, and we have so much planned for the next year. So it’s a great thing to have.

Shane Hastie: Certainly does. Some really interesting events in there. And you make the point that it’s not a hobby. This is something that you’re doing as part of your day job. So thinking about organizations who want to support this sort of thing, who maybe don’t already have programs like this in place, what would they need to do to establish that?

Ways to help establish these communities in organisations [09:10]

Neria Yashar: First of all, the employees, the engineers, they can feel if you take it seriously or not. If you’re saying it’ll be at 5:00 PM, after hours, and you’ll need to pay for the food or whatever, then it feels like it’s not really important. So we have a budget that Wix is giving us, because they think it’s a really important goal, and we always do it at 10:00 AM. We want you to miss two hours of your job. It’s really important. We think it’s so important that you can have a pause and stop working for two hours and come and listen to lectures and meet other women. So for other companies I would say: get the budget, do it during job hours, publish it, and make it an official community. It’s not a hobby, and it’s not just something fun; it’s something serious. It’s something that we want to do because we want to promote women and we want to give them the feeling that they’re really important to us.

Shane Hastie: You’ve made a point of the disparity and you said there were 18% in your engineering degree, and I think you mentioned 20 to 30% women in R&D in general. This is obviously not a good situation for our industry. We know all of the research about diverse teams are better and healthier. We get better outcomes and so forth. Why does this problem still persist? We’ve known about it for decades.

Addressing the imbalances will take time [10:47]

Neria Yashar: Yes, it’s an issue that we cannot solve in one week or one month or one year. It’s an issue that will take decades to solve. And although we are aware of that and we want to close this gap, it’s not that simple. So I think the main reason that we have this situation is the education that we give to our kids. When you look at the way that people raise boys and the way that they raise girls, it’s a different way, although we may not be aware of that, but girls, we always tell them, “You don’t need to do something which might be difficult and you need to be a good girl.” It’s a very common phrase, be a good girl, do whatever you’re told. And we don’t say that to boys. So when girls grow up, they think, “I should be so polite, I should be so nice.”

And then they don’t come and demand high salaries, and they don’t think that they deserve the promotion. And then when they need to choose a subject in high school, maybe technology, maybe computer science, it’s too much, it’s for boys. So even in high school they don’t go in that direction. Research shows that there are many young women who go and learn the STEM subjects, the science and math and engineering. But later on we see fewer and fewer women in the engineering degrees and in our engineering positions at work. And of course, as you go higher, fewer managers. There are almost no CEOs who are women, so that’s a problem. I think it basically starts with education.

Shane Hastie: You mentioned you’re a mother of two. How are you raising your children?

Neria Yashar: So, unfortunately. No, I’m kidding. I have two boys. It’s not unfortunate, because they are perfect for me. But I think my sons, although they’re not girls that I can raise as strong, independent women, when they see us, me and my husband, at home, they see two parents that are working equally, and we both really care about our careers and we work many, many hours. So I think it’s really important that I set an example for them, to see that a woman’s job is as important as her husband’s job. And women can bring a lot of money home, and they can be really successful and very strong. And I think my example for them will make them very, very respectful men who will see women the way that they should see them.

Shane Hastie: What else can we do to address the biases that we know are built into the systems?

Awareness is the starting point to addressing the imbalances [13:36]

Neria Yashar: I think the first thing that we need to do, as I mentioned, it’s not something that we can solve instantly, but the first thing that has to happen is that men and women must have awareness of the things that we do. Because I can tell you as a woman that if you had asked me 10 years ago, I would say that there’s nothing wrong. Women can choose to go and do whatever they want. But as I grew up and read more about things, I realized that I have some problems and issues with the way that I behave. And I think most women do. Sheryl Sandberg wrote a book, Lean In. It’s a pretty famous book, and one of her examples is a chapter about sitting at the table. When we go to lectures, to meetups, to important meetings, we usually sit, even physically, in a place which is not central.

And we often don’t speak out and don’t say our ideas and thoughts, because we might come across as, I don’t know, not perfect, or it might be a stupid idea, so maybe I shouldn’t say that. But men, they don’t think like that. So I think it’s really important for men and for women to be aware of the differences, and the awareness is the first step towards the solution. And imagine that you are an important manager in a tech company and you need to give someone a promotion. So statistically, men will come and say, “I want this promotion. I want to be promoted, I want to have more responsibility.” And women, even if they do want that promotion, they will feel maybe embarrassed to come and say that. Maybe they would feel that they don’t deserve it, although they do, and they are professional the same way the men are. So if this manager is aware of the difference, he might come up to these women and tell them, “What do you think? Do you want this promotion? Are you thinking about the promotion?” And that can make a real difference and change in these women’s lives.

Shane Hastie: So that’s one example of how a manager can make a concrete difference. What are some other ways that men can be allies?

Standing up as allies [15:56]

Neria Yashar: So first of all, I do see a movement of men starting to be aware and to be pro-women, which is a wonderful thing. Another way that men can be allies is to support and to try to close the gap. I’ll give other examples. We said before that women sometimes might hesitate about saying their opinions. So I’ve seen men at Wix, my manager, he can come and say, “Okay, I want everyone to say their opinions, not just the two engineers that speak a lot. I want to hear everyone’s opinion.” And he goes around the table and he asks everyone to say something. So my manager, he asks everyone to say their opinions, and he always gives the feeling that everything you say is really important and everything is valid.

So when you know that you can say everything and this meeting is a safe place, I think it’s really important and women will feel more likely to speak out and say what they think. And I think also when men are aware of the differences so they can help us even with salaries, maybe I’m optimistic, but I think that when you’re thinking about your wife, which is maybe embarrassed to ask for a raise, then you might be nice to the engineer that is working with you and tell her, “Maybe you should get a raise. Maybe you should ask for a raise.” I think being aware of that and helping other women is really an important step for us to do.

Shane Hastie: As an engineer in R&D, what’s your day? Tell me about your day.

Neria Yashar: Oh, my day. So working at Wix, Wix is a really wonderful company and it really focuses on the developer and gives the developer a lot of responsibility. The company really trusts each developer to take their own work to production, to think end to end. So it’s really all about ownership. I would say the main steps of my job are that I get a project, and then I do some research and write a plan for that project. I think about a design and an architecture for solving the problem. And of course I always present it to others. I like to get other people’s opinions. Maybe I’ll show it to my team to see if they have some other ideas. And it always really helps me to think of new ways to solve it, or maybe they remind me of things that I forgot. I’m really into teamwork. I really love it. So then I have the design and I need to break it into small tasks, prioritize the tasks, and we need to set timelines for everything.

And of course, while we’re solving everything we need to think about testing, performance, privacy, all the things that software engineers do. And as I said, I really love working with people. So all this time I’m collaborating with other teams and with my product manager. And at the end, of course, when I’m done, my code is always perfect and bug free. No, no, not really.

Shane Hastie: If only. If only.

So you talk there about enjoying working in teams and certainly we see the team often as the primary source of value in organizations today. What makes great teamwork?

What makes great teamwork? [19:32]

Neria Yashar: Great teamwork, I think it basically comes down to the people that you’re working with. So at Wix, I hope it’s okay to say, but we have a phrase that we say, “No assholes.” You don’t want to work with someone who is annoying or someone who is disrespectful. So I think when you’re working in a safe environment where everyone can speak their minds, that makes for really, really great teamwork. And also helping each other and supporting and reviewing each other’s code and helping each other with the design and sharing the progress with each other. So that really helps me feel like I’m part of a team. Of course, everyone has their own projects and JIRA tickets, but at the end, we have one goal that we want to reach, and the project is under one umbrella. So I think it’s really nice to see how all those small tasks come together and we work together as a team and then reach our main goal of the project. So if I had to say the most important thing: be nice and care about each other.

Shane Hastie: That’s good advice. Be nice and care about each other.

So Neria, before we wrap up today, is there anything else you’d like to say or anything else you’d like to add at this point?

Neria Yashar: Yes, Shane. I will say just that from my experience in the last year, I think sometimes when you think about doing something but it’s outside of your comfort zone and you might hesitate about that, I think it’s really great to be proactive about things that you really care about. And I would personally recommend doing that. For me, establishing the community was such a thing that brought me joy, and I’m so glad and proud that I did that. Although I’m not really the kind of person to do those things. And I think I’m so grateful that Wix and other companies, they understand the importance of having diversity in our teams and helping women promote those topics.

Shane Hastie: And Neria, really powerful thoughts and points through here. If people wanted to continue the conversation, where would they find you?

Neria Yashar: First of all, feel free to reach out and to ask for advice or anything else on my LinkedIn, Neria Poria Yashar. If you want to talk about founding and establishing communities or pro-women activities or anything else, feel free to reach out. I always love to talk with new people.

Shane Hastie: We’ll make sure that your LinkedIn profile is in the show notes. Thank you so much for taking the time to talk to us today.

Neria Yashar: Thank you Shane.



Decoding MongoDB’s Options Activity: What’s the Big Picture? – Benzinga

MMS Founder
MMS RSS

Posted on mongodb google news. Visit mongodb google news


Whales with a lot of money to spend have taken a noticeably bullish stance on MongoDB.

Looking at options history for MongoDB (MDB), we detected 62 trades.

If we consider the specifics of each trade, 33% of the investors opened trades with bullish expectations and 29% with bearish ones.

Of the spotted trades, 39 are puts, totaling $2,465,602, and 23 are calls, totaling $1,239,111.

Projected Price Targets

Taking into account the Volume and Open Interest on these contracts, it appears that whales have been targeting a price range from $190.0 to $400.0 for MongoDB over the last 3 months.

Volume & Open Interest Development

Looking at the volume and open interest is an insightful way to conduct due diligence on a stock.

This data can help you track the liquidity and interest for MongoDB’s options for a given strike price.

Below, we can observe the evolution of the volume and open interest of calls and puts, respectively, for all of MongoDB’s whale activity within a strike price range from $190.0 to $400.0 in the last 30 days.


MongoDB Call and Put Volume: 30-Day Overview

Significant Options Trades Detected:

Symbol PUT/CALL Trade Type Sentiment Exp. Date Ask Bid Price Strike Price Total Trade Price Open Interest Volume
MDB PUT TRADE NEUTRAL 12/19/25 $58.75 $54.25 $56.43 $290.00 $169.2K 254 152
MDB PUT TRADE NEUTRAL 12/19/25 $57.75 $52.5 $55.15 $290.00 $165.4K 254 42
MDB PUT SWEEP BEARISH 06/21/24 $32.8 $31.75 $32.15 $335.00 $164.4K 67 103
MDB CALL SWEEP NEUTRAL 06/07/24 $6.65 $5.9 $6.65 $352.50 $164.3K 1 249
MDB CALL SWEEP BULLISH 06/07/24 $20.95 $20.95 $20.95 $310.00 $155.0K 5 99

About MongoDB

Founded in 2007, MongoDB is a document-oriented database with nearly 33,000 paying customers and well past 1.5 million free users. MongoDB provides both licenses as well as subscriptions as a service for its NoSQL database. MongoDB’s database is compatible with all major programming languages and is capable of being deployed for a variety of use cases.

Where Is MongoDB Standing Right Now?

  • Currently trading with a volume of 1,690,424, MDB’s price is down 5.53%, now at $315.52.
  • RSI readings suggest the stock may currently be oversold.
  • Anticipated earnings release is in 0 days.

What The Experts Say On MongoDB

A total of 2 professional analysts have given their take on this stock in the last 30 days, setting an average price target of $467.50.

  • In a cautious move, an analyst from Needham downgraded its rating to Buy, setting a price target of $465.
  • Maintaining their stance, an analyst from B of A Securities continues to hold a Buy rating for MongoDB, targeting a price of $470.

Options trading presents higher risks and potential rewards. Astute traders manage these risks by continually educating themselves, adapting their strategies, monitoring multiple indicators, and keeping a close eye on market movements. Stay informed about the latest MongoDB options trades with real-time alerts from Benzinga Pro.


Market News and Data brought to you by Benzinga APIs

Article originally posted on mongodb google news. Visit mongodb google news



Artisan Partners Limited Partnership Buys New Shares in MongoDB, Inc. (NASDAQ:MDB)

MMS Founder
MMS RSS

Posted on mongodb google news. Visit mongodb google news

Artisan Partners Limited Partnership acquired a new stake in MongoDB, Inc. (NASDAQ:MDB) in the 4th quarter, according to the company in its most recent 13F filing with the SEC. The firm acquired 25,793 shares of the company’s stock, valued at approximately $10,545,000.

Other hedge funds also recently modified their holdings of the company. Blue Trust Inc. grew its holdings in MongoDB by 937.5% during the fourth quarter. Blue Trust Inc. now owns 83 shares of the company’s stock worth $34,000 after purchasing an additional 75 shares during the period. Huntington National Bank lifted its position in MongoDB by 279.3% during the third quarter. Huntington National Bank now owns 110 shares of the company’s stock worth $38,000 after acquiring an additional 81 shares during the last quarter. Parkside Financial Bank & Trust lifted its position in MongoDB by 38.3% during the third quarter. Parkside Financial Bank & Trust now owns 130 shares of the company’s stock worth $45,000 after acquiring an additional 36 shares during the last quarter. Beacon Capital Management LLC lifted its position in MongoDB by 1,111.1% during the fourth quarter. Beacon Capital Management LLC now owns 109 shares of the company’s stock worth $45,000 after acquiring an additional 100 shares during the last quarter. Finally, Raleigh Capital Management Inc. lifted its position in MongoDB by 156.1% during the third quarter. Raleigh Capital Management Inc. now owns 146 shares of the company’s stock worth $50,000 after acquiring an additional 89 shares during the last quarter. Institutional investors own 89.29% of the company’s stock.

Wall Street Analysts Forecast Growth

MDB has been the subject of several research reports. Bank of America cut their price objective on shares of MongoDB from $500.00 to $470.00 and set a “buy” rating for the company in a research report on Friday, May 17th. Truist Financial lifted their price target on shares of MongoDB from $440.00 to $500.00 and gave the company a “buy” rating in a research report on Tuesday, February 20th. Tigress Financial lifted their price target on shares of MongoDB from $495.00 to $500.00 and gave the company a “buy” rating in a research report on Thursday, March 28th. Monness Crespi & Hardt upgraded shares of MongoDB to a “hold” rating in a research report on Tuesday. Finally, DA Davidson upgraded shares of MongoDB from a “neutral” rating to a “buy” rating and lifted their price target for the company from $405.00 to $430.00 in a research report on Friday, March 8th. Two research analysts have rated the stock with a sell rating, four have assigned a hold rating and twenty have assigned a buy rating to the stock. According to MarketBeat.com, the company currently has a consensus rating of “Moderate Buy” and a consensus price target of $444.57.

Read Our Latest Report on MDB

MongoDB Stock Performance

Shares of MongoDB stock traded down $15.40 on Thursday, hitting $318.59. 990,014 shares of the company traded hands, compared to its average volume of 1,296,847. The firm’s 50-day simple moving average is $356.71 and its 200 day simple moving average is $392.33. The company has a debt-to-equity ratio of 1.07, a quick ratio of 4.40 and a current ratio of 4.40. MongoDB, Inc. has a 12-month low of $275.76 and a 12-month high of $509.62.

MongoDB (NASDAQ:MDB) last posted its quarterly earnings data on Thursday, March 7th. The company reported ($1.03) earnings per share for the quarter, missing the consensus estimate of ($0.71) by ($0.32). The company had revenue of $458.00 million for the quarter, compared to analysts’ expectations of $431.99 million. MongoDB had a negative net margin of 10.49% and a negative return on equity of 16.22%. On average, analysts expect that MongoDB, Inc. will post -2.53 EPS for the current year.

Insider Buying and Selling

In related news, Director Dwight A. Merriman sold 4,000 shares of MongoDB stock in a transaction on Wednesday, April 3rd. The shares were sold at an average price of $341.12, for a total value of $1,364,480.00. Following the transaction, the director now owns 1,156,784 shares in the company, valued at approximately $394,602,158.08. The sale was disclosed in a filing with the SEC, which is available through the SEC website. In other MongoDB news, Director Dwight A. Merriman sold 1,000 shares of the firm’s stock in a transaction on Wednesday, May 1st. The shares were sold at an average price of $379.15, for a total value of $379,150.00. Following the sale, the director now owns 522,896 shares of the company’s stock, valued at approximately $198,256,018.40. The sale was disclosed in a legal filing with the SEC, which is available through this hyperlink. Insiders have sold a total of 46,802 shares of company stock worth $16,514,071 in the last 90 days. 3.60% of the stock is owned by company insiders.

About MongoDB


MongoDB, Inc., together with its subsidiaries, provides a general-purpose database platform worldwide. The company provides MongoDB Atlas, a hosted multi-cloud database-as-a-service solution; MongoDB Enterprise Advanced, a commercial database server for enterprise customers to run in the cloud, on-premises, or in a hybrid environment; and Community Server, a free-to-download version of its database, which includes the functionality that developers need to get started with MongoDB.

See Also

Institutional Ownership by Quarter for MongoDB (NASDAQ:MDB)


Article originally posted on mongodb google news. Visit mongodb google news



MongoDB, Inc. (NASDAQ:MDB) Shares Bought by PNC Financial Services Group Inc.

MMS Founder
MMS RSS

Posted on mongodb google news. Visit mongodb google news

PNC Financial Services Group Inc. increased its stake in MongoDB, Inc. (NASDAQ:MDB) by 11.0% during the fourth quarter, according to its most recent filing with the Securities & Exchange Commission. The fund owned 2,746 shares of the company’s stock after acquiring an additional 272 shares during the quarter. PNC Financial Services Group Inc.’s holdings in MongoDB were worth $1,123,000 at the end of the most recent quarter.

Other institutional investors and hedge funds also recently modified their holdings of the company. Dynamic Technology Lab Private Ltd increased its stake in MongoDB by 36.8% in the 4th quarter. Dynamic Technology Lab Private Ltd now owns 1,183 shares of the company’s stock worth $484,000 after purchasing an additional 318 shares in the last quarter. Norges Bank acquired a new stake in shares of MongoDB during the fourth quarter valued at approximately $326,237,000. New Century Financial Group LLC acquired a new stake in shares of MongoDB during the fourth quarter valued at approximately $228,000. Fiera Capital Corp raised its holdings in shares of MongoDB by 0.8% during the fourth quarter. Fiera Capital Corp now owns 224,293 shares of the company’s stock valued at $91,702,000 after acquiring an additional 1,695 shares during the period. Finally, Princeton Capital Management LLC raised its holdings in shares of MongoDB by 3.2% during the fourth quarter. Princeton Capital Management LLC now owns 4,356 shares of the company’s stock valued at $1,781,000 after acquiring an additional 134 shares during the period. 89.29% of the stock is owned by institutional investors and hedge funds.

Insider Buying and Selling at MongoDB

In other MongoDB news, CRO Cedric Pech sold 1,430 shares of the stock in a transaction on Tuesday, April 2nd. The shares were sold at an average price of $348.11, for a total value of $497,797.30. Following the completion of the sale, the executive now directly owns 45,444 shares of the company’s stock, valued at approximately $15,819,510.84. The sale was disclosed in a legal filing with the SEC, which is available through the SEC website. Also, Director Dwight A. Merriman sold 1,000 shares of the stock in a transaction on Monday, April 1st. The shares were sold at an average price of $363.01, for a total transaction of $363,010.00. Following the completion of the sale, the director now directly owns 523,896 shares of the company’s stock, valued at $190,179,486.96. The disclosure for this sale can be found here. Insiders sold a total of 46,802 shares of company stock valued at $16,514,071 in the last ninety days. Company insiders own 3.60% of the company’s stock.

Analyst Upgrades and Downgrades

A number of equities research analysts have recently issued reports on MDB shares. DA Davidson upgraded shares of MongoDB from a “neutral” rating to a “buy” rating and boosted their price target for the company from $405.00 to $430.00 in a report on Friday, March 8th. Needham & Company LLC restated a “buy” rating and set a $465.00 price target on shares of MongoDB in a report on Friday, May 3rd. KeyCorp cut their price target on shares of MongoDB from $490.00 to $440.00 and set an “overweight” rating on the stock in a report on Thursday, April 18th. Loop Capital started coverage on shares of MongoDB in a report on Tuesday, April 23rd. They set a “buy” rating and a $415.00 price target on the stock. Finally, Redburn Atlantic restated a “sell” rating and set a $295.00 price target (down previously from $410.00) on shares of MongoDB in a report on Tuesday, March 19th. Two investment analysts have rated the stock with a sell rating, three have assigned a hold rating and twenty have given a buy rating to the stock. Based on data from MarketBeat, MongoDB presently has an average rating of “Moderate Buy” and a consensus target price of $444.57.

Read Our Latest Analysis on MDB

MongoDB Price Performance

NASDAQ MDB opened at $333.99 on Thursday. The company has a current ratio of 4.40, a quick ratio of 4.40 and a debt-to-equity ratio of 1.07. The company has a fifty day simple moving average of $356.71 and a 200 day simple moving average of $392.33. The company has a market cap of $24.32 billion, a P/E ratio of -134.67 and a beta of 1.19. MongoDB, Inc. has a 1 year low of $275.76 and a 1 year high of $509.62.

MongoDB (NASDAQ:MDB) last posted its quarterly earnings data on Thursday, March 7th. The company reported ($1.03) earnings per share (EPS) for the quarter, missing the consensus estimate of ($0.71) by ($0.32). The firm had revenue of $458.00 million for the quarter, compared to the consensus estimate of $431.99 million. MongoDB had a negative return on equity of 16.22% and a negative net margin of 10.49%. On average, sell-side analysts predict that MongoDB, Inc. will post -2.53 EPS for the current year.

MongoDB Company Profile


MongoDB, Inc., together with its subsidiaries, provides a general-purpose database platform worldwide. The company provides MongoDB Atlas, a hosted multi-cloud database-as-a-service solution; MongoDB Enterprise Advanced, a commercial database server for enterprise customers to run in the cloud, on-premises, or in a hybrid environment; and Community Server, a free-to-download version of its database, which includes the functionality that developers need to get started with MongoDB.

See Also

Want to see what other hedge funds are holding MDB? Visit HoldingsChannel.com to get the latest 13F filings and insider trades for MongoDB, Inc. (NASDAQ:MDB).

Institutional Ownership by Quarter for MongoDB (NASDAQ:MDB)




Article originally posted on mongodb google news. Visit mongodb google news



MongoDB, Inc. (NASDAQ:MDB) Shares Sold by Commerce Bank – Defense World

MMS Founder
MMS RSS

Posted on mongodb google news. Visit mongodb google news

Commerce Bank lowered its position in MongoDB, Inc. (NASDAQ:MDB) by 6.3% during the fourth quarter, according to its most recent disclosure with the Securities and Exchange Commission. The fund owned 2,006 shares of the company’s stock after selling 136 shares during the quarter. Commerce Bank’s holdings in MongoDB were worth $820,000 as of its most recent filing with the Securities and Exchange Commission.

A number of other hedge funds and other institutional investors have also recently added to or reduced their stakes in MDB. Yousif Capital Management LLC lifted its stake in shares of MongoDB by 4.8% during the third quarter. Yousif Capital Management LLC now owns 762 shares of the company’s stock worth $264,000 after purchasing an additional 35 shares in the last quarter. Private Advisor Group LLC lifted its stake in shares of MongoDB by 37.0% during the third quarter. Private Advisor Group LLC now owns 3,716 shares of the company’s stock worth $1,285,000 after purchasing an additional 1,003 shares in the last quarter. Mitsubishi UFJ Kokusai Asset Management Co. Ltd. lifted its stake in shares of MongoDB by 7.4% during the third quarter. Mitsubishi UFJ Kokusai Asset Management Co. Ltd. now owns 47,479 shares of the company’s stock worth $16,421,000 after purchasing an additional 3,258 shares in the last quarter. abrdn plc lifted its stake in shares of MongoDB by 46.0% during the third quarter. abrdn plc now owns 22,803 shares of the company’s stock worth $7,887,000 after purchasing an additional 7,188 shares in the last quarter. Finally, Arizona State Retirement System lifted its stake in shares of MongoDB by 0.7% during the third quarter. Arizona State Retirement System now owns 19,256 shares of the company’s stock worth $6,660,000 after purchasing an additional 142 shares in the last quarter. Institutional investors and hedge funds own 89.29% of the company’s stock.

Analyst Upgrades and Downgrades

MDB has been the topic of a number of recent research reports. Loop Capital initiated coverage on MongoDB in a research note on Tuesday, April 23rd. They set a “buy” rating and a $415.00 price objective on the stock. Tigress Financial boosted their price objective on MongoDB from $495.00 to $500.00 and gave the stock a “buy” rating in a research note on Thursday, March 28th. Bank of America dropped their target price on MongoDB from $500.00 to $470.00 and set a “buy” rating on the stock in a research report on Friday, May 17th. Guggenheim lifted their target price on MongoDB from $250.00 to $272.00 and gave the stock a “sell” rating in a research report on Monday, March 4th. Finally, KeyCorp dropped their target price on MongoDB from $490.00 to $440.00 and set an “overweight” rating on the stock in a research report on Thursday, April 18th. Two investment analysts have rated the stock with a sell rating, four have assigned a hold rating and twenty have issued a buy rating to the company. Based on data from MarketBeat, MongoDB has a consensus rating of “Moderate Buy” and an average price target of $444.57.

Check Out Our Latest Research Report on MongoDB

MongoDB Stock Down 0.6%

Shares of MDB stock opened at $333.99 on Thursday. The company has a debt-to-equity ratio of 1.07, a quick ratio of 4.40 and a current ratio of 4.40. The business has a 50 day moving average of $356.71 and a two-hundred day moving average of $392.33. MongoDB, Inc. has a 52 week low of $275.76 and a 52 week high of $509.62.

MongoDB (NASDAQ:MDB) last issued its earnings results on Thursday, March 7th. The company reported ($1.03) EPS for the quarter, missing analysts’ consensus estimates of ($0.71) by ($0.32). MongoDB had a negative net margin of 10.49% and a negative return on equity of 16.22%. The company had revenue of $458.00 million during the quarter, compared to analyst estimates of $431.99 million. As a group, equities analysts predict that MongoDB, Inc. will post -2.53 earnings per share for the current fiscal year.

Insider Transactions at MongoDB

In other news, CAO Thomas Bull sold 170 shares of the business’s stock in a transaction that occurred on Tuesday, April 2nd. The stock was sold at an average price of $348.12, for a total transaction of $59,180.40. Following the completion of the transaction, the chief accounting officer now directly owns 17,360 shares in the company, valued at $6,043,363.20. The sale was disclosed in a legal filing with the SEC, which is accessible through this link. Also, CRO Cedric Pech sold 1,430 shares of the business’s stock in a transaction that occurred on Tuesday, April 2nd. The stock was sold at an average price of $348.11, for a total transaction of $497,797.30. Following the completion of the transaction, the executive now directly owns 45,444 shares in the company, valued at $15,819,510.84. The sale was disclosed in a legal filing with the SEC, which is accessible through this link. Over the last three months, insiders sold 46,802 shares of company stock worth $16,514,071. Company insiders own 3.60% of the company’s stock.

MongoDB Profile


MongoDB, Inc., together with its subsidiaries, provides a general-purpose database platform worldwide. The company provides MongoDB Atlas, a hosted multi-cloud database-as-a-service solution; MongoDB Enterprise Advanced, a commercial database server for enterprise customers to run in the cloud, on-premises, or in a hybrid environment; and Community Server, a free-to-download version of its database, which includes the functionality that developers need to get started with MongoDB.

Featured Articles

Institutional Ownership by Quarter for MongoDB (NASDAQ:MDB)




Article originally posted on mongodb google news. Visit mongodb google news



.NET 8+ on Ubuntu 24.04: Official Release with Collaborative Support

MMS Founder
MMS Robert Krzaczynski

Article originally posted on InfoQ. Visit InfoQ

Ubuntu 24.04 has launched with a .NET release available from day one in the official Ubuntu feeds, making it immediately usable. Container images for .NET 8+ are available, including noble, noble-chiseled, and noble-chiseled-extra flavors. Additionally, .NET 6 and 7 are accessible through the dotnet/backports repository. Microsoft and Canonical are collaborating on servicing and support, ensuring simultaneous availability of .NET fixes.

For the first time, a .NET release is available from day one in the official Ubuntu feeds. Previously, .NET 6 was added to Ubuntu 22.04 a few months after that release. From Ubuntu 24.04 onwards, the Ubuntu feeds are the official source of .NET packages, and the .NET installation documentation has been updated accordingly. Ubuntu 24.04 container images for .NET 8+ are now available, including the noble, noble-chiseled, and noble-chiseled-extra image flavors.

To install .NET 8 on Ubuntu 24.04:

$ sudo apt update && sudo apt install -y dotnet-sdk-8.0

In the comments below the official post, a question appeared asking whether there is any plan to offer the same easy way to install and update .NET 8+ on Raspberry Pis. Richard Lander, a product manager at Microsoft, answered:

I usually just curl and tar the tar.gz from the download page.
The install scripts also work great.
Separately, we’ve talked to a few different folks about getting .NET into Debian (which would help with Raspberry Pi OS). We haven’t been able to make that happen. It is certainly something that would be very nice.
Important: .NET 8 works on Debian Arm32. It doesn’t work on Ubuntu 24.04 Arm32 – context.

.NET 6 and 7 are available in the Ubuntu .NET backports package repository, maintained by Canonical. To install .NET 6 using the dotnet/backports repository:

$ sudo add-apt-repository ppa:dotnet/backports
$ sudo apt install -y dotnet-sdk-6.0

.NET 7 can be installed using the same pattern, with the dotnet/backports repository only needing to be registered once.

Another question concerned the possibility of installing .NET 6 and .NET 8 side by side. Thomas Glaser mentioned that he had attempted to add the Microsoft repository and install both, but ran into a number of package conflicts. The answer was the following:

I recommend not using the MS repository. We’ve come to the conclusion that Microsoft and Canonical both publishing the same packages doesn’t work. Your experience is evidence of that, sadly. Here’s a doc that describes how to resolve that.
The post describes how to install .NET 8 (from the archives) and .NET 6 (from a PPA).
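With .NET 8 installed from the archives and .NET 6 from the PPA, a project can pin which SDK it builds with via a global.json file at the repository root. The version number below is illustrative; it should match one of the SDKs reported by dotnet --list-sdks:

```json
{
  "sdk": {
    "version": "6.0.100",
    "rollForward": "latestFeature"
  }
}
```

With rollForward set to latestFeature, the .NET CLI selects the highest installed 6.0.x feature band rather than requiring that exact version.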

Microsoft and Canonical are collaborating on servicing and support. Microsoft provides security and functional fixes to Canonical ahead of Patch Tuesday releases via a private channel, allowing time for building and testing. A similar process is followed with Red Hat. The goal is to ensure that .NET fixes are available simultaneously everywhere.




How to Scale Agile Software Development with Technology and Lean

MMS Founder
MMS Ben Linders

Article originally posted on InfoQ. Visit InfoQ

Agile software development can be done at scale with the use of technology like self-service APIs, infrastructure provisioning, real-time collaboration software, and distributed versioning systems. Lean can complement and scale an agile culture with techniques like obeyas, systematic problem-solving, one-piece-flow and takt time, and kaizen. At FlowCon France, Fabrice Bernhard spoke about how his company combines technology with lean thinking to do agile software development at scale.

The agile manifesto doesn’t apply to large organizations, Bernhard stated. Leaders looking for principles to keep their culture agile while scaling their software organization will need to look elsewhere. And unfortunately that “elsewhere” is now crowded with options called “agile at scale”, many of which are very bureaucratic and therefore not in the spirit of the agile manifesto, he mentioned.

Agile can scale, Bernhard said; there are many examples of organizations that scaled while maintaining an agile culture. In the body of knowledge of lean thinking, they found the principles they were looking for to scale their organization while staying true to the agile manifesto.

In the book The Lean Tech Manifesto that Bernhard wrote with Benoît Charles-Lavauzelle, he explores principles, systems, and tools that lean thinking provides to extend the principles of the agile manifesto. He mentioned some examples:

  • Value models, obeyas, and value streams, to scale “Customer Collaboration” by ensuring “Value for the Customer” becomes the North Star of the whole organization
  • Systematic problem-solving with PDCA and 5S, supported by team leaders and enabled in our digital world by collaboration technology, to scale “individuals and interactions” and transform the organization into a “tech-enabled network of teams”
  • Jidoka, dantotsu, poka-yoke, pull, one-piece-flow and takt time, to implement “right-first-time and just-in-time” and scale “working software”
  • Standards, kaizen, skills matrix and communities of practice, to scale “responding to change” with “building a learning organization”

Bernhard mentioned that they felt that lean thinking didn’t fully explain how some large agile organizations were succeeding. He decided to explore how the Linux open-source project and its community scaled from 1 to 55,000 contributors, and how technology was used to address the scaling issues they faced along the way:

The first scaling crisis happened in 1996, when Linus wrote that he was “buried alive in emails”. It was addressed by adopting a more modular architecture, with the introduction of loadable kernel modules, and by creating the maintainer role: maintainers support contributors in meeting the high standards of quality needed to merge their contributions.

The second scaling crisis lasted from 1998 to 2002, and was finally addressed by the adoption of BitKeeper, later replaced by Git. This distributed the job of merging contributions across the network of maintainers and contributors.

In both cases, technology was used to reduce the amount of dependencies between teams, help contributors keep a high level of autonomy, and make it easy to merge all those contributions back into the main repository, Bernhard said.

Technology can help reduce the need to communicate between teams whenever they have a dependency on another team to get their work done. Typical organizational dependencies, such as when a team relies on another team’s data, can be replaced by self-service APIs using the right technologies and architecture, Bernhard mentioned. This can be extended to more complicated dependencies, such as infrastructure provisioning, as AWS pioneered when they invented EC2, offering self-service APIs to spin up virtual servers, he added.

Another type of dependency is dealing with the challenge of merging contributions made to a similar document, whether it’s an illustration, a text, or source code, Bernhard mentioned. This has been transformed in the last 15 years by real-time collaboration software such as Google Docs and distributed versioning systems such as Git, he said.
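The distributed-merging model that Git enables can be sketched with a few commands. The repository and file names below are hypothetical; the point is that a contributor commits on their own clone and a maintainer pulls and merges that work directly, with no central lock or coordination step:

```shell
set -e
workdir=$(mktemp -d)
cd "$workdir"

# "Upstream" repository, owned by the maintainer.
git init -q upstream
cd upstream
git config user.email maintainer@example.com
git config user.name Maintainer
echo "v1" > module.txt
git add module.txt
git commit -qm "initial version"

# A contributor clones it and commits on a topic branch,
# fully autonomously.
cd ..
git clone -q upstream contributor
cd contributor
git config user.email contributor@example.com
git config user.name Contributor
git switch -q -c fix-typo
echo "v2" > module.txt
git commit -qam "fix typo"

# The maintainer fetches the branch straight from the contributor's
# clone and merges it; every clone carries the full history.
cd ../upstream
git fetch -q ../contributor fix-typo
git merge -q --ff FETCH_HEAD
cat module.txt   # prints "v2"
```

Because each clone is a complete repository, the merge work itself is distributed across the network of maintainers and contributors, exactly the property Bernhard highlights.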

Bernhard mentioned that he learned a lot from how the Linux community addressed its scaling issues. And where the first agile methodologies, such as Scrum or XP, focus on a single team of software engineers, lean thinking has been battle-tested at scale for decades in very large organizations, Bernhard said. Anyone trying to scale an agile organization should study lean thinking to benefit from decades of experience on how to lead large organizations while staying true to the spirit of the agile manifesto, he concluded.

