Month: June 2023

MMS • Steef-Jan Wiggers
At the recent annual Build conference, Microsoft introduced the preview of Microsoft Azure API Center – a new Azure service and a part of the Azure API Management platform that enables tracking APIs in a centralized location for discovery, reuse, and governance.
With API Center, users can access a central hub to discover, track, and manage all APIs within their organization, fostering company-wide API standards and promoting reuse. In addition, it facilitates collaboration between API program managers, developers who discover and consume APIs to accelerate or enable application development, API developers who create and publish APIs, and other stakeholders involved in API programs.
Source: https://github.com/Azure/api-center-preview
The key capabilities of API Center include:
- API inventory management centralizes the collection of all APIs within an organization. These APIs can vary in type (such as REST, GraphQL, gRPC), lifecycle stage (development, production, deprecated), and deployment location (Azure cloud, on-premises data centers, other clouds).
- Real-world API representation provides detailed information about APIs, including their versions, specifications, deployments, and the environments in which they are deployed.
- Metadata properties enhance governance and discoverability by organizing and enriching cataloged APIs, environments, and deployments with unified built-in and custom metadata throughout the entire asset portfolio.
- Workspaces allow management of administrative access to APIs and other assets with role-based access control.
Regarding API inventory management, Fernando Mejia, a senior program manager, said during a Microsoft Build session on APIs:
With API Center, you can bring your APIs from Apigee, AWS API Gateway, or MuleSoft API Management. The idea is to have a single inventory for all of your APIs.
Mike Budzynski, a senior product manager at Azure API Management at Microsoft, explained to InfoQ the release roadmap of the API Center feature:
API Center is currently in preview for evaluation by Azure customers. At first, we are limiting access by invitation only. We plan to open it up for broader adoption in the fall.
In addition, in a tech community blog post, Budzynski wrote:
During the preview, API Center is available free of charge. Future releases will add developer experiences, improving API discovery and reusability, and integrations simplifying API inventory onboarding.
Microsoft currently allows a limited set of customers to access API Center through a request form.

MMS • Anton Skornyakov

Key Takeaways
- We all carry some developmental trauma that makes it difficult for us to collaborate with others – a crucial part of work in agile software development.
- Leading in a trauma-informed way is not practicing unsolicited psychotherapy, and it is not justifying destructive behaviors without addressing them.
- Being more trauma-informed in your leadership can help everyone act more maturely and stay more cognitively available, especially in emotionally challenging situations.
- In trauma-informed working environments, people pay more attention to their physical and emotional state.
- They also rely more on the power of intention, set goals in a less manipulative manner, and are able to be empathetic without taking ownership of others’ problems.
In recent decades, scientific and clinical understanding of how the human nervous system develops and works has increased tremendously. Its implications are so profound that they radiate far beyond the field of psychology. Trauma-informed approaches to law, volleyball coaching, legal counseling, education, and social activism have emerged.
It is time to consider how it affects working in an agile tech environment.
Defining “trauma-informed” work
Working in a trauma-informed manner means professionally conducting whatever you set out to do while taking into account the different forms of trauma affecting the people you work with.
It is not practicing unsolicited psychotherapy, and it is not justifying destructive behaviors without addressing them.
This means different things when you do trauma-informed legal counseling, volleyball coaching, or agile coaching.
An everyday example for an agile coach would be to notice the shallow, rapid breathing of participants at the start of a meeting and to invite them to take three long breaths, spend 20 seconds reflecting on what they each want to accomplish in the meeting, and summarize that in one sentence.
Traumatic patterns make collaboration difficult
Let’s look at a typical example of a team member moving from individual responsibility for tasks to sharing in team responsibility for done increments. The reactions triggered by such a change vary strongly depending on the person affected. They could just be happy and enthusiastic about new opportunities. However, they could also experience spiraling self-doubt, become subliminally aggressive, perfectionistic and distrustful, withdraw themselves from most interactions, or become avoidant.
Any of the above reactions may be adequate for a short period in a particular situation. However, if they become a pattern that doesn’t resolve itself, it harms everyone involved. Such patterns typically originate from traumatic experiences we’ve had.
When such a pattern is triggered, our attention gets stuck within. We may think and overthink whether we are allowed to speak up and, if so, what words we can use. We may search for tricky ways to stop the change or to completely disconnect ourselves from what is changing around us.
Whatever the pattern is, once it is triggered, we pay less attention to what is actually happening in reality and become more preoccupied with ourselves. Dealing with these internal patterns can take up a large portion of our cognitive and emotional resources. We act less like mature adults. This makes finding a common way forward for us, our co-workers, and our leaders much harder.
Traumatic patterns used to serve us as kids but are harming us in adult life
There are different forms of trauma. Here I am not focusing on shock trauma, which typically arises from one or a few particularly terrible situations. The patterns I describe usually originate from different forms of developmental trauma, which emerge when our environment systematically does not meet our basic needs as a child.
When this happens, we can’t do anything about it as children, so we adapt by changing what we think is normal in the world and in ourselves. We end up denying ourselves some part of being human. Paradoxically, this helps us a lot, as it dissolves a consistently hurtful dissonance.
Later, when we are confronted with this need in our adult life, we react from the altered idea of ourselves and the world. However, since what we are denying ourselves is an inevitable part of being human, we end up in an unending struggle. When we are in such an inner struggle, our capacity for using our conscious thinking and being empathic is strongly impaired.
Typical Examples from Tech Organizations
Since we are all, to some extent, affected by developmental trauma, you can find countless examples of its effects, small or large, in any organization. And as developmental trauma originates from relationships with other humans, its triggers are always somehow linked to individuals and interactions with them. Just think of colleagues with typical emotional patterns in interactions with you or notice your patterns in interactions with particular colleagues.
One pattern I’ve often observed with some software developers I worked with, and am also familiar with myself, is perfectionism. The idea of delivering a result that is not perfect and imagining being made aware of a mistake I made is sometimes unbearable. And most of the time, it’s unconscious. I just always try to make something as perfect as I can imagine. This can make collaborating with other people very hard: they may not meet my standards for perfection, or I may fear being at the mercy of their high standards that I can’t fulfill.
Another such pattern is self-doubt, which manifests in the inability to express one’s wishes or opinions. In this pattern, the pain of others potentially seeing our statement as inappropriate or useless is so strong that we don’t even invest time into thinking about our own position. Again, this typically is unconscious, and it’s just how we are used to behaving. Overlooking a critical position can cause significant long-term damage to organizations. And almost always, another person in our place and with our knowledge would express similar concerns and wishes.
A trauma-informed approach to leading people in agile organizations
First of all, I would like to emphasize that we are still at the very beginning of professionalizing our work with respect to developmental trauma, and I would love to see many more discussions and contributions on these subjects.
I want to share how I changed my practice as an agile coach and trainer after completing basic training to become a trauma-informed (NARM®-informed) professional. These insights come from understanding how trauma professionals work, and how to deal with trauma without justifying destructive behavior or beating people over the head with their patterns.
Higher attention to physical and emotional states
Software is, by definition, very abstract. For this reason, we naturally tend to be in our heads and thoughts most of the time while at work. However, a more trauma-informed approach requires us to pay more attention to our physical state and not just to our brain and cognition. Our body and its sensations are giving us many signs, vital not just to our well-being but also to our productivity and ability to cognitively understand each other and adapt to changes. Paradoxically, in the end, paying more attention to our physical and emotional state gives us more cognitive resources to do our work.
Noticing our bodily sensations in the moment, like breath or muscle tension in a particular area, can be a first step to getting out of a traumatic pattern. And a generally higher level of body awareness can help us fall into such patterns less often in the first place. Put simply, our body awareness anchors us in the here and now, making it easier for us to recognize past patterns as inadequate for the current situation.
One format that helps with this and is known to many agile practitioners is the Check-In in the style of the Core Protocols. I use it consistently when training or conducting workshops and combine it with a preceding silent twenty seconds for checking in with ourselves on how we are physically feeling. It allows everyone to become aware of potentially problematic or otherwise relevant issues before we start. After such a Check-In, most groups can easily deal with any problems that might have seriously impeded the meeting if left unsaid. People are naturally quite good at dealing with emotional or otherwise complicated human things, provided these things are allowed to surface.
The power and importance of intention
My second significant learning is that we need a deep respect for the person’s intention and an understanding of the power that can be liberated by following one’s intention.
For me, as a coach, this means that when interacting with an organization, clarifying my client’s intention is a major and continuous part of my work. It also means that supporting them in following their intention is more important than following my expert opinion. I should be honest and share my thoughts; however, it is my client’s journey that I am curiously supporting. I know that this way, change will happen faster. It will be more durable and sustainable than if the client blindly followed my advice to adopt this or that practice.
In fact, clients who choose to blindly follow a potentially very respected consultant often reenact traumatic experiences from their own childhood. It is a different thing when organizational leaders are driven by their own intentions and uncover their paths faster and with more security because an expert supports them on their journey.
For the leadership in organizations that rightfully have their own goals and strategies, understanding the power of intention means leading with more invitations and relying more on volunteering. Instead of assigning work, they try to clarify what work and what projects need to be accomplished and allow people to choose for themselves. Even if something is not a person’s core discipline, they may decide to learn something new and be more productive in their assignment than a bored expert. This way of leading people requires more clarity on boundaries and responsibility than assigning work packages to individuals.
For all of us, respecting our own intentions and being aware of their power means looking for the parts of work that spark our curiosity or feel like fun to do and following them as often as we can.
I use this insight every time I deliver training. At the end of a typical two-day workshop, my participants will have an exhaustive list of things they want to try, look into, change, or think about. From my experience with thousands of participants, having such a long list of twenty items or more isn’t going to lead to any meaningful result. Most of the time, it’s just overwhelming, and people end up not doing any of the items on their lists. So at the end of each training, I invite my participants to take 5 minutes and scan through all their takeaways and learnings to identify 2 or 3 that spark joy when they think of applying them. Not only has this produced a tremendous amount of positive feedback, but participants also regularly report how relieving and empowering these 5 minutes are for them.
Set goals as changes of state, not changes in behavior
My third trauma-informed insight is that I became aware of an essential nuance in setting goals, a key discipline in leading agile organizations.
Often when we set a goal, we define it as a behavior change. Instead, a trauma-therapeutic practitioner will explore the state change the client believes this would bring.
For example, if someone says, “I want my developers to hit the deadline on our team.”
I might ask: “If your developers do start hitting the deadlines, how do you hope this will impact your leadership?”
The outcome of such a goal-setting conversation, the state someone wants their leadership or themselves to be in, is often a much more durable goal, and it’s typically more connected to the actual need of the person setting the goal. On the other hand, focusing on behavioral changes often leads to manipulation that doesn’t achieve what we really want.
The above example applies to an internal situation. However, looking for a change in the state of our customers is also a different conversation than looking into the behavioral change we want them to exhibit. Here the change in state is also a more stable, long-term goal.
Leadership topics benefiting from trauma-informed approaches
I believe that in organizational leadership, there is a lot more to learn from trauma-informed approaches, to name a few:
- Get a deeper understanding of the stance of responsibility in yourself and your co-workers and how to get there. In NARM®, this is called agency.
- Understand the difference between authentic empathy that supports someone in need and unmanaged empathy that overwhelms and disrupts relationships.
- Get a new relieving perspective on our own and others’ difficult behaviors.
Becoming trauma-informed in your daily work
I believe that you can only guide people to where you’ve been yourself. Familiarize yourself with the topic and start reflecting on your own patterns. You’ll automatically become aware of many moments in which trauma plays a role in your work and will be able to find new ways to deal with it.
My journey started with listening to the “Transforming Trauma” podcast. If you like to read books, I’d recommend “The Body Keeps the Score” by Bessel van der Kolk or “When the Body Says No: The Cost of Hidden Stress” by Gabor Maté. However, the moment I truly started to reflect on and apply trauma-informed practices was during the NARM® basic training I completed last year. There is something unique about it: it is the first module of the education for psychotherapists, yet it is intentionally open to all other helping professionals and anyone working with humans. I’d recommend completing such a course to anyone serious about becoming trauma-informed.

MMS • RSS
PayPal recently open-sourced JunoDB, a distributed key-value store built on RocksDB. JunoDB, PayPal’s highly available and security-focused database, processes 350 billion requests daily.
PayPal’s wide variety of applications relies heavily on JunoDB, a distributed key-value store. JunoDB is used for virtually every critical backend service at PayPal, including authentication, risk assessment, and transaction settlement. Data may be cached and accessed quickly from apps using JunoDB, reducing the strain on backend services and relational databases. JunoDB, however, is not an ordinary NoSQL database. It was developed to meet PayPal’s specific requirements. Thus, it can simultaneously manage many concurrent users and connections without slowing down. Originally built in single-threaded C++, it has been rewritten in Golang to take advantage of parallel processing and many cores.
The JunoDB architecture is a dependable and extensible system that prioritizes ease of use, scalability, security, and flexibility. Proxy-based design simplifies development by abstracting away complex logic and setup from applications and allowing for linear horizontal connection scaling. When expanding or contracting clusters, JunoDB uses consistent hashing to split data and reduce the amount of data that must be moved. JunoDB uses a quorum-based protocol and a two-phase commit to guarantee data consistency and ensure there is never any downtime for the database.
Protecting information both in transit and at rest is a high priority. Hence TLS support and payload encryption are implemented. Finally, JunoDB’s flexibility and ability to adapt over time are guaranteed by its pluggable storage engine design, which makes it simple to switch to new storage technologies as they become available.
The core of JunoDB is made up of three interdependent parts:
- The JunoDB proxy allows application data to be easily stored, retrieved, and updated, thanks to the JunoDB client library’s provided API. With support for languages including Java, Golang, C++, Node, and Python, the JunoDB thin client library can be easily integrated with programs built in various languages.
- Client queries and replication traffic from remote sites are processed by JunoDB proxy instances that sit behind a load balancer. Each proxy establishes a connection to all JunoDB storage server instances and routes requests to a set of storage server instances according to the shard mapping stored in etcd.
- JunoDB uses RocksDB to store data in memory or persistent storage upon receiving an operation request from a proxy.
JunoDB maintains high levels of accessibility and system responsiveness while supporting many client connections. It also manages data expansion and maintains high read/write throughput even as data volume and access rates rise. To achieve six 9s of system availability, JunoDB uses a mix of solutions, including data replication inside and outside data centers and failover mechanisms.
JunoDB provides exceptional performance at scale, managing even the most intensive workloads with response times in the millisecond range and without disrupting the user experience. In addition, JunoDB offers a high throughput and low latencies, enabling applications to scale linearly without compromising performance.
Users can get the source code for JunoDB, released under the Apache 2 license, on GitHub. PayPal produced server configuration and client development tutorial videos to aid developers’ database use. The team plans to include a Golang client and a JunoDB operator for Kubernetes in the future.

MMS • Andrew Hoblitzell

Nvidia’s new NeMo Guardrails package for large language models (LLMs) helps developers prevent LLM risks such as the generation of harmful or offensive content and access to sensitive data. It offers multiple features for controlling the behavior of these models, providing an essential layer of protection for safer deployment in an increasingly AI-driven landscape.
The package is built on Colang, a modeling language and runtime developed by Nvidia for conversational AI. “If you have a customer service chatbot, designed to talk about your products, you probably don’t want it to answer questions about our competitors,” said Jonathan Cohen, Nvidia vice president of applied research. “You want to monitor the conversation. And if that happens, you steer the conversation back to the topics you prefer”.
NeMo Guardrails currently supports three broad categories: Topical, Safety, and Security. Topical guardrails ensure that conversations stay focused on a particular topic. Safety guardrails ensure that interactions with an LLM do not result in misinformation, toxic responses, or inappropriate content; they also enforce policies to deliver appropriate responses and prevent hacking of the AI systems. Security guardrails prevent an LLM from executing malicious code or calling an external application in a way that poses a security risk.
Guardrails features a sandbox environment, allowing developers the freedom to experiment with AI models without jeopardizing production systems, thus reducing the risk of generating harmful or offensive content. Additionally, a risk dashboard is provided, which consistently tracks and scrutinizes the use of AI models, assisting developers in identifying and mitigating potential risks before they lead to major issues. Moreover, it supplies a clear set of policies and guidelines designed to direct the usage of AI within organizations.
Reception of NeMo Guardrails has generally been positive, but some have expressed caution about limitations and constraints that developers need to be aware of when using the package. Karl Freund of Cambrian-AI Research writes that “guardrails could be circumvented or otherwise compromised by malicious actors, who could exploit weaknesses in the system to generate harmful or misleading information”. Jailbreaks, hallucinations, and other issues also remain active research areas for which no current system offers foolproof protection.
Other tools also exist for improving safety when working with large language models. For example, Language Model Query Language (LMQL) is designed to add programmatic control to natural language prompting and is built on top of Python. Microsoft’s Guidance framework can also be used to address the issue that LLMs do not guarantee output in a specific data format.
Nvidia advises that Guardrails works best as a second line of defense, suggesting that companies developing and deploying chatbots should still train the model on a set of safeguards with multiple layers.

MMS • Chris Klug

Key Takeaways
- Project Orleans has been completely overhauled in the latest version, making it easy to work with. It has also been re-written to fit in with the new IHost abstraction that was introduced in .NET Core.
- The actor model is wonderful for the scenarios where it makes sense. It makes development a lot easier for scenarios where you can break down your solution into small stateful entities.
- The code that developers need to write can be kept highly focused on solving the business needs, instead of on the clustering, networking and scaling, as this is all managed by Project Orleans under the hood, abstracted away.
- Project Orleans makes heavy use of code generators. When you implement simple marker interfaces, the source generators automatically add code to your classes during the build. This keeps your code simple and clean.
- Getting started with Project Orleans is just a matter of adding references to a couple of NuGet packages, and adding a few lines of code to the startup of your application. After that, you can start creating Grains, by simply adding a new interface and implementation.
In this article, we will take a look at Project Orleans, which is an actor model framework from Microsoft. It has been around for a long time, but the new version, version 7, makes it a lot easier to get started with, as it builds on top of the .NET IHost abstraction. This allows us to add it to pretty much any .NET application in a simple way. On top of that it abstracts away most of the complicated parts, allowing us to focus on the important stuff, the problems we need to solve.
Project Orleans
Project Orleans is a framework designed and built by Microsoft to enable developers to build solutions using the actor model, a way of building applications that lets developers architect and build certain types of solutions much more easily than with, for example, an n-tier architecture.
Instead of building a monolith, or a services-based architecture where the services are statically provisioned, it allows you to decompose your application into lots of small, stateful services that can be provisioned dynamically when you need them. On top of that, they are spread out across a cluster more or less automatically.
This type of architecture lends itself extremely well to certain types of solutions, for example IoT devices, online gaming, or auctions. Basically, any solution that would benefit from an interactive, stateful “thing” that manages the current state and functionality, like a digital representation of an IoT device, a player in an online game, or an auction. Each of these scenarios becomes a lot easier to build when it is backed by an in-memory representation that can be called, compared to trying to manage it using an n-tier application and some state store.
Initially, Orleans was created to run Halo. And using the actor-model to back a game like that makes it possible to do things like modelling each player as its own tiny service, or actor, that handles that specific gamer’s inputs. Or model each game session as its own actor. And to do it in a distributed way that has few limitations when it needs to scale.
However, since the initial creation, and use in Halo, it has been used to run a lot of different services, both at Microsoft and in the public, enabling many large, highly scalable solutions to be built by decomposing them into thousands of small, stateful services. Unfortunately, it is hard to know what systems are using Orleans, as not all companies are open about their tech stack for different reasons. But looking around on the internet, you can find some examples. For example, Microsoft uses it to run several Xbox services (Halo, Gears of War for example), Skype, Azure IoT Hub and Digital Twins. And Honeywell uses it to build an IoT solution. But it is definitely used in a lot more places, to run some really cool services, even if it might not be as easy as you would have hoped to see where it is used.
As you can see, it has been around for a long time, but has recently been revamped to fit better into the new .NET core world.
The actor pattern
The actor pattern is basically a way to model your application as a bunch of small services, called actors, where each actor represents a “thing”. So, for example, you could have an actor per player in an online game. Or maybe an actor for each of your IoT devices. But the general idea is that an actor is a named, singleton service with state. With that as a baseline, you can then build pretty much whatever you want.
It might be worth noting that using an actor-model approach is definitely not the right thing for all solutions. I would even say that most solutions do not benefit very much from it, or might even become more complex if it is used. But when it fits, it allows the solution to be built in a much less complex way, as it allows for stateful actors to be created.
Having to manage state in stateless services, like we normally do, can become a bit complicated. You constantly need to retrieve the state that you want to work with for each request, then manipulate it in whatever way you want, and finally persist it again. This can be both slow and tedious, and potentially put a lot of strain on the data store. So, we often try to speed this up and take some of the load off the backing store using a cache, which in turn adds even more complexity. With Project Orleans, your state and functionality are already instantiated and ready in memory in a lot of cases. And when they aren’t, it handles the instantiation for you. This removes a lot of the tedious, repetitive work that is needed for the data store communication, as well as the need for a cache, as everything is already in memory.
So, if you have any form of entity that works as a form of state machine for example, it becomes a lot easier to work with, as the entity is already set up in the correct state when you need it. On top of that, the single threaded nature of actors allows you to ignore the problems of multi-threading. Instead, you can focus on solving your business problems.
Imagine an online auction system that allows users to place bids and read existing bids. Without Orleans, you would probably handle this by having an Auction service that allows customers to perform these tasks by reading and writing data to several tables in a datastore, potentially supported by some form of cache to speed things up as well. However, in a high-load scenario, managing multiple bids coming in at once can get very complicated. More precisely, it requires you to figure out how to handle the locks in the database correctly to make sure that only the right bids are accepted based on several business rules. But you also need to make sure that the locks don’t cause performance issues for the reads. And so on …
By creating an Auction-actor for each auction item instead, all of this can be kept in memory. This makes it possible to easily query and update the bids without having to query a data store for each call. And because the data is in-memory, verifying whether a bid is valid or not is simply a matter of comparing it to the existing list of bids, and making a decision based on that. And since it is single-threaded by default, you don’t have to handle any complex threading. All bids will be handled sequentially. The performance will also be very good, as everything is in-memory, and there is no need to wait for data to be retrieved.
Sure, bids that are made probably need to be persisted in a database as well. But maybe you can get away with persisting them using an asynchronous approach to improve throughput. Or maybe you would still have to slow down the bidding process by writing it to the database straight away. Either way, it is up to you to make that decision, instead of being forced in either direction because of the architecture that was chosen.
Challenges we might face when using the actor pattern
First of all, you have to have a scenario that works well with the pattern. And that is definitely not all scenarios. But other than that, some of the challenges include things like figuring out what actors make the most sense, and how they can work together to create the solution.
Once that is in place, things like versioning of them can definitely cause some problems if you haven’t read up properly on how you should be doing that. Because Orleans is a distributed system, when you start rolling out updates, you need to make sure that the new versions of actors are backwards compatible, as there might be communication going on in the cluster using both the old and the new version at the same time. On top of that, depending on the chosen persistence, you might also have to make sure that the state is backwards compatible as well.
In general, it is not a huge problem, but it is something that you need to consider. That said, you often need to consider these things anyway if you are building any form of service-based solution.
Actor-based development with Project Orleans
It is actually quite simple to build actor based systems with Project Orleans. The first step is to define the actors, or Grains as they are called in Orleans. This is a two-part process. The first part is to define the API we need to interact with the actor, which is done using a plain old .NET interface.
There are a couple of requirements for the interface though. First of all, it needs to extend one of a handful of interfaces that come with Orleans. Which one depends on what type of key you want to use. The choices you have are IGrainWithStringKey, IGrainWithGuidKey, IGrainWithIntegerKey, or a compound version of them.
All the methods on the interface also need to be async, as the calls might be going across the network. It could look something like this:
public interface IHelloGrain : IGrainWithStringKey
{
    Task<string> SayHello(User user);
}
Any parameters being sent to, or from, the interface also need to be marked with a custom serialisation attribute called GenerateSerializer, and a serialisation helper attribute called Id. Orleans uses a separate serialisation solution, so the Serializable attribute doesn’t work unfortunately. So, it could end up looking something like this:
[GenerateSerializer]
public class User
{
    [Id(0)] public int Id { get; set; }
    [Id(1)] public string FirstName { get; set; }
    [Id(2)] public string LastName { get; set; }
}
The second part is to create the grain implementation. This is done by creating a C# class, that inherits from Grain, and implements the defined interface.
Because Orleans is a little bit magic (more on that later), we only need to implement our own custom parts. So, implementing the IHelloGrain could look something like this:
public class HelloGrain : Grain, IHelloGrain
{
    public Task<string> SayHello(User user)
    {
        return Task.FromResult($"Hello {user.FirstName} {user.LastName}!");
    }
}
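Following the same pattern, the auction scenario discussed earlier could be sketched as a grain that keeps its bids in memory and validates new bids against the current highest one. This is a hypothetical illustration; the interface, types, and validation rule are assumptions, not taken from a real system:

public interface IAuctionGrain : IGrainWithStringKey
{
    Task<bool> PlaceBid(Bid bid);
    Task<List<Bid>> GetBids();
}

[GenerateSerializer]
public class Bid
{
    [Id(0)] public string Bidder { get; set; }
    [Id(1)] public decimal Amount { get; set; }
}

public class AuctionGrain : Grain, IAuctionGrain
{
    // All bids for this auction item live in memory; calls are processed one at a time.
    private readonly List<Bid> _bids = new();

    public Task<bool> PlaceBid(Bid bid)
    {
        var highest = _bids.Count > 0 ? _bids.Max(b => b.Amount) : 0m;
        if (bid.Amount <= highest)
            return Task.FromResult(false); // rejected: a bid must beat the current highest bid

        _bids.Add(bid);
        // Accepted bids could be persisted to a database here, synchronously or asynchronously.
        return Task.FromResult(true);
    }

    public Task<List<Bid>> GetBids()
    {
        return Task.FromResult(new List<Bid>(_bids));
    }
}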
It is a good idea to put the grains in a separate class library if you are going to have a separate client, as both the server and client part of the system need to be able to access them. However, if you are only using it behind something else, for example a web API, and there is no external client talking to the Orleans cluster, it isn’t strictly necessary.
A thing to note here is that you should not expose your cluster to the rest of the world. There is no security built into the cluster communication, so the recommended approach is to keep the Orleans cluster “hidden” behind something like a web API.
Once the grains are defined and implemented, it is time to create the server part of the solution, the Silos.
Luckily, we don’t have to do very much at all to set these up. They are built on top of the IHost interface that has been introduced in .NET. And because of that, we just need to call a simple extension method to get our silo registered. That will also take care of registering all the grain types by using reflection. In its simplest form, it ends up looking like this:
var host = Host.CreateDefaultBuilder()
    .UseOrleans((ctx, silo) => {
        silo.UseLocalhostClustering();
    })
    .Build();
This call will also register a service called IGrainFactory, which allows us to access the grains inside the cluster. So, when we want to talk to a grain, we just write something like this:
var grain = grainFactory.GetGrain<IHelloGrain>(id);
var response = await grain.SayHello(myUser);
And the really cool thing is that we don’t need to manually create the grain. If a grain of the requested type with the requested ID doesn’t exist, it will automatically be created for us. And if it isn’t used in a while, the garbage collector will remove it for us to free up memory. However, if you request the same grain again, after it has been garbage collected, or potentially because a silo has been killed, a new instance is created and returned to us automatically. And if we have enabled persistence, it will also have its state restored by the time it is returned.
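To illustrate that last point, state that should survive deactivation can be injected into a grain through Orleans’ persistence API. The following is a minimal sketch; the grain, the state type, and the storage provider name "shop" (which would need to be registered on the silo, for example with silo.AddMemoryGrainStorage("shop")) are illustrative assumptions:

public interface ICounterGrain : IGrainWithStringKey
{
    Task<int> Increment();
}

[GenerateSerializer]
public class CounterState
{
    [Id(0)] public int Value { get; set; }
}

public class CounterGrain : Grain, ICounterGrain
{
    private readonly IPersistentState<CounterState> _counter;

    // The attribute names this piece of state and the storage provider that backs it.
    public CounterGrain([PersistentState("counter", "shop")] IPersistentState<CounterState> counter)
    {
        _counter = counter;
    }

    public async Task<int> Increment()
    {
        _counter.State.Value++;
        await _counter.WriteStateAsync(); // persisted, so the value is restored after reactivation
        return _counter.State.Value;
    }
}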
How Project Orleans makes it easier for us to use the actor pattern
Project Orleans removes a lot of the ceremony when it comes to actor-based development. For example, setting up the cluster has been made extremely easy by using something called a clustering provider. On top of that, it uses code generation, and other .NET features, to make the network aspect of the whole thing a lot simpler. It also hides the messaging part that is normally a part of doing actor development, and simply provides us with asynchronous interfaces instead. That way, we don’t have to create and use messages to communicate with the actors.
For example, setting up the server part, the silo, is actually as simple as running something like this:
var host = Host.CreateDefaultBuilder(args)
    .UseOrleans(builder =>
    {
        builder.UseAzureStorageClustering(options => options.ConfigureTableServiceClient(connectionString));
    })
    .Build();
As you can see, there is not a lot that we need to configure. It is all handled by conventions and smart design. This is something that can be seen with the code-generation as well.
When you want to interact with a grain, you just ask for an instance of the interface that defines the grain, and supply the ID of the grain you want to work with. Orleans will then return a proxy class for you that allows you to talk to it without having to manage any of the network stuff for example, like this:
var grain = grainFactory.GetGrain<IHelloGrain>(id);
var response = await grain.SayHello(myUser);
A lot of these simplifications are made possible using some really nice code generation that kicks into action as soon as you reference the Orleans NuGet packages.
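As mentioned earlier, the recommended way to expose functionality to the outside world is to keep the cluster hidden behind something like a web API. A minimal sketch of co-hosting a silo inside an ASP.NET Core minimal API, reusing the IHelloGrain and User types from above, could look like this (the route and the sample values are illustrative):

var builder = WebApplication.CreateBuilder(args);

// Host the silo in the same process; the cluster itself is never exposed publicly.
builder.Host.UseOrleans(silo => silo.UseLocalhostClustering());

var app = builder.Build();

// Only this HTTP endpoint is reachable from the outside; it forwards the call to a grain.
app.MapGet("/hello/{id}", async (IGrainFactory grains, string id) =>
{
    var grain = grains.GetGrain<IHelloGrain>(id);
    return await grain.SayHello(new User { Id = 1, FirstName = "Ada", LastName = "Lovelace" });
});

app.Run();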
Where can readers go when they want to learn more about Project Orleans and the actor model?
The easiest way to get more information about building solutions with Project Orleans is to simply go to the official docs of Project Orleans and have a look. Just remember that when you are looking for information about Orleans, you need to make sure that you are looking at documentation that is for version 7+. The older version looked a bit different, so any documentation for that would be kind of useless unfortunately.
Where to go from here?
With Project Orleans being as easy as it is to get started with, it makes for a good candidate to play around with if you have some time left over and want to try something new, or if you think it might fit your problem. There are also a lot of samples on GitHub from the people behind the project if you feel like you need some inspiration. Sometimes it can be a bit hard to figure out what you can do with a new technology, and how to do it. And looking through some of the samples gives you a nice view into what the authors of the project think it should be used for. I must admit, some of the samples are a bit, well, let’s call it contrived, and made up mostly to show off some parts of the functionality. But they might still provide you with some inspiration of how you can use it to solve your problem.
For me, I ended up rebuilding an auction system in a few hours just to prove to my client how much easier their system would be to manage using an actor-based model. They have yet to implement it in production, but due to the simplicity of working with Project Orleans, it was easy to create a proof of concept in just a few hours. And I really recommend doing that if you have a scenario where you think it might work. Just remember to set a timer, because it is very easy to get carried away and just add one more feature.
In tech, it is rare to find something as complicated as clustering being packaged into something that is as simple to work with as Project Orleans. Often the goal is to make it simple and easy to use, but as developers we tend to expose every single config knob we can find to the developer. Project Orleans has stayed away from this, and provides a nice experience that actually felt fun to work with.

MMS • Robert Krzaczynski
Microsoft has introduced the C# Dev Kit, a new extension to Visual Studio Code, offering an enhanced C# development environment for Linux, macOS and Windows. This kit, combined with the C# extension, uses an open-source Language Server Protocol (LSP) host to provide an efficient and configurable environment. The source repository for the extension is currently being migrated and will be available later this week.
The C# Dev Kit brings familiar concepts from Visual Studio to make VS Code programming in C# more productive and reliable. It includes a collection of VS Code extensions that work together to provide a comprehensive C# editing environment that includes artificial intelligence-based programming, solution management and integrated testing. The C# extension provides language services, while the C# Dev Kit extension builds on the Visual Studio foundation for solution management, templates and debugging testing. Moreover, the optional IntelliCode for C# Dev Kit extension provides the editor with programming capabilities based on artificial intelligence.
Source: https://devblogs.microsoft.com/visualstudio/announcing-csharp-dev-kit-for-visual-studio-code/
This tool streamlines project management in C# programming by adding a Solution Explorer view that integrates with the VS Code workspace. It allows developers to effortlessly add projects and files to their solutions using templates. The extension simplifies the organisation of tests for XUnit, NUnit, MSTest and bUnit, displaying them in the Test Explorer panel. The C# Dev Kit, based on the open source C# extension with LSP (Language Server Protocol) host, provides exceptional performance and integrates with Roslyn and Razor for advanced features such as IntelliSense, navigation and code formatting.
In addition, the IntelliCode for C# Dev Kit extension, which is installed automatically, provides AI-assisted support beyond the basic IntelliSense technology. It offers advanced IntelliCode features such as whole-line completion and starred suggestions, prioritising frequently used options in the IntelliSense completion list based on the personal codebase. With the C# Dev Kit, users can experience increased performance and reliability not only when programming, but also when managing solutions, debugging and testing.
By installing the C# Dev Kit extension, users of the VS Code C# extension (supported by OmniSharp) can upgrade to the latest pre-release version compatible with the C# Dev Kit, as explained in the documentation.
The following question was raised under this post on Microsoft’s website:
Does this enable creating solutions from scratch as well, or does it still require an initial dotnet new console and open folder? It’s been a while since I last checked VS Code out, but that has been bugging me every time to create a new solution.
Tim Heuer, a principal product manager at Microsoft, answered:
If you have no folder open (blank workspace) you’ll see the ability to create a project from there. I can’t paste a picture here in the comments but there is a button on a workspace with no folder open that says “Create .NET Project” that will launch the template picker with additional questions of where to create it.
Additionally, Leslie Richardson, a product manager at Microsoft, added that further information about the solution explorer experience in VS Code can be found here.
Overall, the tool gets positive feedback from the community. Users see a significant improvement in using Visual Studio Code to code in C#.

MMS • Anthony Alford

Meta AI open-sourced the Massively Multilingual Speech (MMS) model, which supports automatic speech recognition (ASR) and text-to-speech synthesis (TTS) in over 1,100 languages and language identification (LID) in over 4,000 languages. MMS can outperform existing models and covers nearly 10x the number of languages.
MMS is based on the wav2vec model and is pre-trained on a dataset containing 491K hours of speech in 1,406 languages, which is based on existing cross-lingual datasets as well as a new dataset of 9,345 hours of unlabelled recordings of religious text readings, songs, and other speech in 3,860 languages. To fine-tune the ASR and TTS models, Meta used recordings of Bible readings in 1,107 languages, which provided labeled cross-lingual speech data. The fine-tuned MMS models can perform ASR and TTS in those 1,107 languages as well as LID in 4,017 languages. According to Meta,
Many of the world’s languages are in danger of disappearing, and the limitations of current speech recognition and speech generation technology will only accelerate this trend. We envision a world where technology has the opposite effect, encouraging people to keep their languages alive since they can access information and use technology by speaking in their preferred language.
Training speech processing AI models using supervised learning requires large datasets of labeled speech data, usually audio recordings paired with transcripts. For many languages such as English, such datasets are readily available; however, for low-resource languages with very few native speakers, collecting a large dataset might be impossible. Meta’s previous research on XLS-R and NLLB showed that a single cross-lingual model combined with self-supervised pre-training can, after fine-tuning on small amounts of data, perform well on approximately 100 languages, even on low-resource ones. More recently, InfoQ covered OpenAI’s Whisper and Google’s USM, which also support around 100 languages each.
To scale their model to handle thousands of languages, Meta needed an audio dataset with more languages. The team chose to use audio recordings of the Christian New Testament; this provided labeled audio data in over 1,000 languages, with an average of 32 hours per language. Although each language’s recordings typically featured a single speaker, usually male, the researchers found that this introduced very little bias in the final models: the models performed similarly on female and male benchmark audio. They also did not find any bias due to the model being trained largely on religious texts.
Meta’s chief AI scientist Yann LeCun called out several highlights of MMS on Twitter, noting in particular it has “half the word error rate of Whisper.” Several users pointed out that the model’s usefulness was limited by its non-commercial license. Another user pointed out other drawbacks, and questioned whether it was indeed better than Whisper:
In my testing, it performs worse than Whisper for transcription to text, mis-hearing words and not hearing implied punctuation. Also it’s about 10x slower than Faster-Whisper. [MMS] uses 20 GB of RAM, while Whisper uses about 1 GB. For these reasons and others this is fairly impractical for people to use for a real application. Also note that you need to specify the language being spoken while Whisper will identify it for you. Hope these issues get resolved over time and OpenAI has a competitor eventually in this area.
The MMS code and pretrained model files are available on GitHub. A list of the supported languages for each task (ASR, TTS, and LID) is available online.

MMS • Almir Vuk

The June update brings a new and updated version of the .NET Upgrade Assistant. With the latest release, the dotnet team has brought the CLI tool up to date with all the new features already available in the Visual Studio extension’s engine. The latest release now offers developers a choice between Visual Studio and CLI experiences, allowing them to take advantage of the latest features and improvements offered by the .NET upgrade tool.
The Upgrade Assistant CLI tool is a valuable resource designed to aid developers in upgrading their applications to the most recent version of .NET. It also facilitates the migration process from older platforms, such as Azure Functions, WinForms, Xamarin.Forms and UWP, to newer alternatives. Now, this functionality is accessible through both the Visual Studio and command line experiences, providing developers with flexibility in their preferred workflow.
In a recent development, the .NET Upgrade Assistant CLI tool has been updated with a new engine, which mirrors the one used in the Visual Studio extension of Upgrade Assistant. This update enables developers to seamlessly transfer various types of applications while leveraging the power of AI during the upgrading process. As a result, developers now have access to enhanced capabilities within the tool, enhancing their overall upgrading journey.
The original announcement post provides step-by-step instructions for installing and utilizing the newly released .NET Upgrade Assistant CLI tool. To begin, users are advised to install the global .NET tool by executing the following command: dotnet tool install -g upgrade-assistant. For those who already have the tool installed and wish to update it to the latest version, the recommended command is dotnet tool update -g upgrade-assistant.
Once the tool is successfully installed, users can proceed with porting their applications. By navigating to the directory containing the desired project, developers can initiate the upgrade process by executing the command upgrade-assistant upgrade. Notably, the CLI tool offers an interactive interface that facilitates the selection of the specific project to be upgraded and the target .NET version. Utilizing the arrow keys, users can navigate the available options and press Enter to initiate the chosen task. This interactive approach simplifies the upgrading process and ensures efficient utilization of the tool’s functionalities.
In addition to this, a user named Mark Adamson wrote a comment with a proposal for future features. On June 2, 2023, the user wrote the following:
This looks great. It would be really useful if it had an option to purely convert a legacy project to the SDK project format. This is often the first step we do when migrating a legacy application because we can then get it merged in reasonably quickly and then follow up with updates to .net standard and then .net core at a later point.
Olia Gavrysh, an author of the original release post, published a tweet that gathered some of the community feedback. One of those comments was related to the Entity Framework Core upgrade; a user named Maikel van Haaren wrote the following:
The new CLI experience is looking good! Is there some list of other changes made to the tool? And whats the plan on supporting the upgrade from EF to EF Core? Any more concrete details on that?
… and Olia’s answer was:
the tool now supports all project types except WebForms (and upgrades for WCF will come soon). It has an improved way of upgrading the projects and all dependencies, such as nuget packages, using AI for some code upgrades, etc.
Based on these community discussions, developers can expect support for upgrading from WCF to CoreWCF to come to the .NET Upgrade Assistant tool soon.
Lastly, a survey is also available, to gather feedback on the newly updated features for upgrading .NET projects from CLI tools. Users are encouraged to share their experiences and suggest improvements by participating in the survey. More info and details about .NET Upgrade Assistant can be found on the official Microsoft dotnet website.

MMS • Claudio Masolo

AWS announced support for Kubernetes version 1.27, called Chill Vibes, in Amazon EKS and Amazon EKS Distro. This Kubernetes version brings many new features that are now generally available, and some of them are potentially disruptive for existing clusters.
In 1.27, the seccomp default feature graduates to stable. Passing the --kubelet-extra-args "--seccomp-default" flag in the node bootstrap script or launch template sets the RuntimeDefault seccomp profile as the default for all containers running on the node. In this way the seccomp profile is defined by the container runtime, instead of running workloads in unconfined (seccomp disabled) mode. Enabling the default seccomp profile may break some workloads, but it is possible to disable it or create custom profiles for specific workloads; the security-profiles-operator allows defining and managing custom profiles for the workloads.
This Kubernetes version includes features that allow better management of pod topology and an easier way to spread pods evenly across various domains. In particular, #3022 unveils the minDomains parameter, which gives the administrator the ability to set the minimum number of domains the pods should occupy, thereby guaranteeing a balanced spread of workloads across the cluster. #3094 introduces the nodeAffinityPolicy and nodeTaintPolicy parameters, which allow for an extra level of granularity in governing pod distribution according to node affinities and taints; this feature is linked to the NodeInclusionPolicyInPodTopologySpread gate. Lastly, #3243 implements the matchLabelKeys field in the topologySpreadConstraints of the pod's specification, which permits the selection of pods for spreading calculations following a rolling upgrade.
In previous versions, the Amazon EKS kubelet had a limit of 10 requests per second for kubeAPIQPS, with a burst limit of 20 requests for kubeAPIBurst. In version 1.27, the kubeAPIQPS limit is raised to 50 requests per second and the kubeAPIBurst limit to 100. These new limits are adopted by the Amazon EKS optimized AMI and improve pod start times when scaling demand spikes, allowing the kubelet to handle pod startups faster and enabling smoother cluster operations.
As in other Kubernetes releases, some APIs are deprecated in version 1.27: k8s.gcr.io is frozen, and registry.k8s.io is now the repository for Kubernetes images. It is important to update all manifests and configurations accordingly.
The seccomp alpha annotations (seccomp.security.alpha.kubernetes.io/pod and container.seccomp.security.alpha.kubernetes.io) have been removed; these annotations were already deprecated in version 1.19. A possible command to check where these annotations are used in a specific cluster is the following:
kubectl get pods --all-namespaces -o json | grep -E 'seccomp.security.alpha.kubernetes.io/pod|container.seccomp.security.alpha.kubernetes.io'
Since version 1.24, the default container runtime for Amazon EKS has been containerd. In 1.27, the --container-runtime command-line argument for the kubelet is removed, so it is mandatory to remove the --container-runtime argument from all node creation scripts and workflows. In Terraform, it is important to remove the bootstrap_extra_args field:
node_groups = {
  eks_nodes = {
    desired_capacity = 2
    max_capacity     = 10
    min_capacity     = 1
    instance_type    = "m5.large"
    k8s_labels = {
      Environment = "test"
      Name        = "eks-worker-node"
    }
    additional_userdata  = "echo foo bar"
    bootstrap_extra_args = "--container-runtime=your-runtime"
  }
}
And in eksctl:
nodeGroups:
  - name: your-nodegroup-name
    instanceType: m5.large
    desiredCapacity: 3
    minSize: 1
    maxSize: 4
    kubeletExtraConfig:
      container-runtime: "your-runtime"
It is important to upgrade Amazon EKS clusters to a supported version. The latest Amazon EKS version to go out of support is 1.22, whose end-of-support date was June 4, 2023; the next is version 1.23, which will reach end of support in October 2023.

MMS • RSS

PayPal last month released the source code for JunoDB, a distributed key-value store it developed internally and which today powers a variety of backend services at the payment site, including 350 billion transaction requests per day, the company says.
JunoDB was originally developed over a decade ago in C++ to address the specific needs of the company, according to a May 17 blog post by Yaping Shi, principal MTS, architect at PayPal. The company was moving to a microservices architecture that would require supporting a large number of persistent inbound connections to data stores, but the company’s IT architects couldn’t find a suitable product to support that approach.
“Since no commercial or open-source solutions were available to handle the required scale out-of-the-box, we developed our own solution to adopt a horizontal scaling strategy for key-value stores,” Shi writes.
The new database would address two primary scaling needs in distributed key-value stores, according to Shi: handling a large number of client connections, and handling growth in read and write throughput.
PayPal database developers created JunoDB with a proxy-based architecture to enable horizontal scaling. The JunoDB client library, which resides in the application, was developed to enable simple data actions through the JunoDB proxy, which manages requests from the clients, coordinates with the data stored on the JunoDB storage server, and provides load balancing. Data in transit is encrypted using TLS, either at the client or at the proxy layer, and stored data is encrypted at rest as well.

JunoDB architecture (Source: JunoDB GitHub page)
JunoDB utilizes consistent hashing to partition data and minimize data movement. To support horizontal scale, it shards data among a number of database partitions located on server nodes. It also uses shards within shards, or “micro shards,” which serve as building blocks for data redistribution, Shi writes.
“Our efficient data redistribution process enables quick incremental scaling of a JunoDB cluster to accommodate traffic growth,” Shi writes. “Currently, a large JunoDB cluster could comprise over 200 storage nodes, processing over 100 billion requests daily.”
JunoDB has since been rewritten in Golang to provide multi-threading and multi-core capabilities. With JunoDB’s data replication methods, including within-data center and cross-data center replication, the key-value store delivers six 9’s of system availability for PayPal.
JunoDB has become a critical part of PayPal’s infrastructure, and powers almost all of the company’s applications today. That includes use as a temporary cache for data, to reduce loads on relational databases, as a “latency bridge” for Oracle applications, and to provide “idempotency,” or a reduction in duplicate processing.
“While other NoSQL solutions may perform well in certain use-cases, JunoDB is unmatched when it comes to meeting PayPal’s extreme scale, security, and availability needs,” Shi writes.
The database is named after Juno, the queen of heaven in Roman mythology.
PayPal has released JunoDB under a permissive Apache 2.0 license. You can download JunoDB from GitHub at github.com/paypal/junodb.