
Botswana President Duma Boko Set to Legalize Undocumented Zimbabweans By Granting Them Temporary Work and Residence Permits

In a groundbreaking move, Botswana’s newly elected president, Duma Boko, has announced plans to provide temporary work and residence permits to undocumented Zimbabweans in the country.

Duma Boko, who recently made history by unseating Botswana’s ruling party of 58 years, shared his vision for the policy change during an interview with the BBC Africa Daily podcast.

The initiative is intended to address both the challenges and opportunities posed by Zimbabweans living in Botswana, many of whom have fled economic hardships in their home country.

Also Read: Botswana’s Ruling Party Loses Power After Nearly 60 Years, Early Election Results Show

Aiming to Formalize Zimbabwean Presence in Botswana

President Boko, 54, emphasized the need for an organized system to legalize the presence of Zimbabweans.

He acknowledged that while many Zimbabweans work in low-wage roles, such as domestic and farm labour, their undocumented status limits their access to basic amenities and often drives them to live outside the law.

“They come in and are undocumented. Then their access to amenities is limited, if it is available at all, and what they then do is they live outside the law and they commit crimes – and this brings resentment.

So what we need to do is to formalise, have a proper arrangement that recognises that people from Zimbabwe are already here,” President Duma Boko said.

Botswana President Duma Boko Set to Legalize Undocumented Zimbabweans (Image Credit: The Habari Network)

President Boko highlighted how a lot of Zimbabwean migrants in Botswana take on jobs that local citizens often find undesirable.

“A lot of these workers from Zimbabwe perform tasks that the citizen finds unattractive… they do jobs that would otherwise not get done, and so there’s no conflict there,” Duma Boko said.

Economic Integration and Skills Development

Boko’s plan aims to not only address labour shortages but also foster skills development among local citizens. He highlighted the opportunity for Batswana to learn essential skills, such as welding and plumbing, from Zimbabwean workers.

“In any and every construction site in Botswana, the majority of people with those skills are from Zimbabwe, so we need to do a twin programme of allowing them to come in and we utilise the skills that they have, and in the process of utilising these skills we also engage in some sort of skills transfer,” he said.

Boko further explained that blocking skilled workers from entering Botswana would hinder the country’s growth, particularly in industries where there are skill shortages.

“We can’t stop people with skills from coming in when we don’t have the skills ourselves – we need to develop these skills and it takes time, so in the interregnum, we need to have them come in properly, come in legally and be rewarded appropriately for the skills that they bring,” he added.





‘Cost us the game’: Tottenham boss reveals reason behind Galatasaray loss


Postecoglou laments first half display as Tottenham are beaten by Galatasaray

Tottenham Hotspur’s first defeat in this season’s Europa League came against Galatasaray, by a 3-2 margin, on Thursday evening. After four successive wins in the league phase of the competition, a goal by Yunus Akgun and a brace by Victor Osimhen sank the Lilywhites, who nonetheless remain in a strong position on the continental stage.

Tottenham’s first half display against Galatasaray cost them the match, as per Postecoglou

Ange Postecoglou spoke to the media after the game and was quizzed regarding his side’s poor display. The manager opined on what cost Spurs the game as Reuters quoted him saying the following.

“Obviously, disappointing result. First half wasn’t great, we just didn’t handle things well at all. Particularly with the ball, just really wasteful and gave it away way too many times, unnecessarily. That allows them to get a foothold in the areas that they’re good at. I think that first half ultimately cost us the game and disappointing for us.”

Tottenham conceded thrice in the first half, including twice in 10 minutes after the half-hour mark to Victor Osimhen. The Nigerian’s quick-fire double eventually put the game beyond the north Londoners. The team started the second half on the front foot but suffered a setback just 15 minutes in, as Will Lankshear’s red card reduced them to 10 men.

Dominic Solanke came off the bench to score with 20 minutes remaining, but the home side held its nerve to pick up its third win of the Europa League. The man advantage obviously worked in Galatasaray’s favour, but their ability to bury chances proved to be the turning point in a match from which Spurs could, at the very least, have salvaged a point.

An amazing period of seven or eight days did not end on the best note for Tottenham as they failed to back up wins against Manchester City and Aston Villa with another difficult one away from home, but they will soon have the opportunity to hit back against Ipswich Town on matchday 11 of the Premier League in Sunday’s clash.


Indeed, they will look for three crucial points in the English top flight in their hunt for a top-four finish.



MLW Announces Return to Chicago with MLW Azteca Lucha – Gerweck.net


Filed to GERWECK.NET:

Chicago, IL – November 6, 2024 – Major League Wrestling (MLW) is thrilled to announce its highly anticipated return to Chicago on Saturday, May 10, 2025, for MLW AZTECA LUCHA at Cicero Stadium.

Following the overwhelming success of the last two sold-out Chicago events, MLW is ready to deliver another electrifying night of action that fans will not want to miss.

As a special thank you to Chicago’s passionate wrestling community for their incredible support, tickets for MLW AZTECA LUCHA will start at just $10.

Tickets go on sale next Wednesday, November 13 at 10 am CT and will be available at LuchaTickets.com and Eventbrite. With the previous two events selling out in advance, fans are encouraged to act fast and secure their seats.

MLW AZTECA LUCHA promises an unforgettable experience featuring your favorite MLW fighters and world-class CMLL luchadores.

“Chicago has been an incredible city for lucha, and we’re excited to bring another night of super lucha to Cicero Stadium,” said AZTECA LUCHA promoter Cesar Duran. “MLW AZTECA LUCHA will showcase the best of MLW and our partnership with CMLL.”

Don’t delay in purchasing your tickets to witness MLW AZTECA LUCHA live in Chicago. With a history of sold-out events, this is your chance to be part of another historic night at Cicero Stadium.

Event Details:

MLW AZTECA LUCHA
Date: Saturday, May 10, 2025
Venue: Cicero Stadium, Chicago, IL
Tickets on Sale: Wednesday, November 13 at 10 am CT
Starting Price: $10
Where to Buy: LuchaTickets.com and Eventbrite

For more event information, visit mlw.com.



Paul Robinson believes Tottenham will want to retain senior star with fast expiring contract


Son Heung-min is a veritable Tottenham Hotspur great; there is no denying this fact.

And yet, he is reaching an age where every player and their employers have to sit down and talk about the future. Son and Spurs are due to do that this season.

According to one former player of the club, however, the Lilywhites may be willing to offer their captain more than a short-term deal.

Paul Robinson believes Tottenham will want to keep Son Heung-min on board for years to come

Speaking exclusively to Tottenham Hotspur News, former England and Tottenham goalkeeper Paul Robinson said that he believed the club would be willing to offer Son Heung-min more than just a year-long extension on his deal.

“I wouldn’t be surprised if they were in talks to extend his contract further. The relationship he’s got with the fans and how he’s thought of at the club, I suspect he’ll stay as long as he wants to.

“Without a recognised No.9 last season, he really carried the mantle until Dominic Solanke came in this summer. It’s kind of accepted that, while he only has seven months left on his deal, we’re not talking about it constantly.

“I would be surprised if it was only going to be a year. The fact that we’re not hearing anything, I think the year’s extension is already a given and it is quiet because it could be more. I wouldn’t be surprised if it was two or three years.”

Robinson on Son’s future in North London.

Recent reports suggest the club are inclined towards triggering the one-year extension clause in Son’s £9.88m-per-year contract, which is currently set to run out at the end of the ongoing season. However, no official talks have been held between the player and the club.

The South Korea and Tottenham captain has made nine appearances for the club so far this season across all competitions, registering three goals and three assists.

Wisdom in brevity

Thanks to modern sports science, players have been able to prolong their stay at the top levels of professional football, though a decline after they hit their thirties is still inevitable for any footballer; you cannot outrun time, after all.

This decline comes at varying stages and manifests differently for everyone. With Son, there are no signs of a drastic fall yet, but one can never truly tell when it will arrive, even with all the right tools at one’s disposal.

So it’s unlikely Tottenham will hand their captain three more years right away. Son is still very much capable of leading the line, but expect him to get only year-long deals going forward. At best, we can expect him to get a two-year deal that would run until June 2027, days before his 35th birthday.






Presentation: Launching AI Agents Across Europe at Breakneck Speed With an Agent Computing Platform


Transcript

Joseph: We are here to share some of our key learnings from developing customer facing LLM powered applications that we deploy across Deutsche Telekom's European footprint. Multi-agent architecture and systems design is a construct we started betting on pretty early in this journey. It has since evolved into a fully homegrown set of tooling, a framework, and a full-fledged platform, which is fully open source and now accelerates the development of AI agents in Deutsche Telekom. We'll walk you through the journey that we undertook and the problem space where we are deploying these AI agents in customer facing use cases. We'll also give you a deep dive into our framework and tooling, plus some code and some cool demos that we have in store for you.

I am Arun Joseph. I lead engineering and architecture for Deutsche Telekom's central AI program, which is referred to as AICC, the AI Competence Center, with the goal of deploying AI across Deutsche Telekom's European footprint. My background is primarily engineering; I come from a distributed systems engineering background. I've built world class teams across the U.S., Canada, and now Germany, and also scalable platforms like IoT platforms. Patrick Whelan is a core member of our team, lead engineer of AICC and of the platform, and has contributed a great deal to open source. A lot of the components that you will see are from Pat.

Whelan: It's been a year now since Arun recruited me for this project. When I started out, I had very basic LLM knowledge, and I thought everything would be different. It turns out, a lot has pretty much stayed the same. It's been very much a year full of learnings, and this presentation is very much a product of that. It's also worth noting that this is very much from the perspective of an engineer, and how we, as engineers, took on this concept of LLMs.

Frag Magenta 1BOT: Problem Space

Joseph: Let's dive into the problem space in which we are deploying this technology at Deutsche Telekom. There is a central program for customer sales and service automation referred to as Frag Magenta; it's called the Frag Magenta 1BOT program. The task is simple: how do you deploy GenAI across our European footprint, which is around 10 countries, and across all of the channels through which customers reach us, which is the chat channel, the voice channel, and also autonomous use cases where this might come in?

Also, as you would have noticed, these European countries require different languages as well. Especially at the time, when we built RAG-based chatbots, this was not something which could really scale, unless you had a platform to solve these hard challenges. How do you build a prompt flow, or a use case, which requires a different approach in the voice channel as against the chat channel? You cannot send links, for example, in the voice channel. Essentially, this is where we started off.

Inception

It is important to understand the background, to understand some of the decisions that we made along this journey. To attack the problem space, we started back last year, somewhere around June, when a small pizza team was formed to look into the emerging GenAI scope. We were primarily looking into RAG-based systems, to see whether such a target could be achieved back then. This is an image inspired by the movie Inception. It's a movie about dream hacking. There are these cool architects in the movie, dream architects, and their job is the coolest job in the world: they create dream worlds and inject them into a dreamer, so that they can influence and guide the dreamer towards a specific goal. When we started looking at LLMs back last year around this time, this is exactly how we felt as engineers and architects.

On one hand, you have this powerful construct which has emerged, but on the other side, it is completely non-deterministic. How do you build applications where a stream of tokens or strings can control a program flow? Classical computing is not built for that. How do you build? What kind of paradigms can we bring in? Essentially, at that point in time, LangChain was the primary framework for building LLM RAG applications. OpenAI had just released the tool calling functionality in the OpenAI APIs. LangChain4j was a port which was also emerging, but there was nothing particularly mature available in the JVM ecosystem. It was not really about the JVM ecosystem, though: the approach of building functions on top of a prompt was not particularly appealing if you really wanted to build something scalable.

Also, as Deutsche Telekom, we had huge investments in the JVM stack. A lot of our transactional systems were on the JVM stack. We have SDKs and client libraries already built on the JVM stack, which allow data pulls, and also observability platforms. What skillsets do you require to build these applications was a question. Is it an AI engineer? Does it require data scientists? Certainly, most models were not production ready. I remember having conversations with some of the major model providers, and none of them advised putting it in front of customers: you always need to have a human in the loop. The technology was still going to evolve. If you look at the problem space, with this background, it was pretty clear we could not take a rudimentary approach, building something and expecting it to work for all these countries with different business processes, APIs, and specifications.

Multi-Agent Systems Inspiration

This also provided an opportunity for us. This is what Pat was referring to. I looked at it, and it was pretty clear there was nothing from a framework or design standpoint which existed to attack this. It was also pretty clear that models could only get better; it was not going to get any worse. What constructs can you build today, assuming the models are going to get better, that will stand the test of time in building a platform which allows democratization of agents? That's how I started looking into open-source contributors within Deutsche Telekom, and we brought a team together to look at it as a foundational platform that needed to be built.

Minsky has always been an inspiring figure. His 1986 book, The Society of Mind, is a set of essays in which he talked about agents and mind, and the mind as a construction of agents. I want to highlight one point here. The recent OpenAI o1 release, and how that model is trained, is not what we are referring to here. We are referring to the programming constructs which are required if you want to build the next generation of applications at scale: certainly, different specialists for different processes collaborating with each other. What is the communication pattern? How do you manage the lifecycle of such entities? These were the questions we had to answer.

Our Agent Platform Journey Map

We set out on a journey wherein we decided we would have to build the next Heroku. I remember exactly telling Pat, we have a chance to build the next Heroku. This is how I started recruiting people, at a point where there was only RAG. Back in September, and it has been one year since, we released our first use case, which was a FAQ RAG bot on LangChain. Today, what we have is a fully open-source multi-agent platform, which we will talk about here, which provides the constructs to manage the entire lifecycle of agents: inter-agent communication, discovery, advanced routing capabilities, and all that. It has not been an easy ride. We're not paid to build frameworks and tooling; we are hired to solve business problems.

With that in mind, it was clear that the approach of rudimentary prompt abstractions with functions on top was not going to scale if you wanted to build this platform. How many developers would you have to hire if you took this approach and then went across all those countries? We have around 100 million customers in Europe alone, and they reach us through all these channels. We knew that voice models were going to emerge, so we needed something fundamental, that was pretty clear. We decided to bet on that curve. We started building the stack with one principle in mind: how can you take the greatest hits of classical computing and bake them into a platform? We created a completely ground-up framework back then, and we ported the whole RAG pipeline, the RAG agent or RAG construct we had released earlier, onto the new stack. It had two layers.

One layer we referred to as the kernel, because we were looking at operating system constructs, and we decided that every developer need not handle these constructs, so let's create a library out of them. Then we have another layer, which at that point in time was the IA platform, or the Intelligent Agents platform, where developers were developing customer facing use cases. This went by the code name LMOS, which stands for Language Models Operating System. We had a modulith back then. We chose Kotlin because we knew that, at that point in time, we had huge investments in the JVM stack. We also knew that we had to democratize this, and there was huge potential in the DSLs which Kotlin brings. Also, the concurrency constructs of Kotlin suited the nature of the applications we see: the APIs are going to be the same OpenAI APIs, they might get enhanced, but you need advanced concurrency constructs. That's why we went with a Kotlin-first approach back then.

Then, in February, the first tool calling agent was released: the billing agent, one API, and Pat was the guy who released it. You could ask the Frag Magenta chatbot, what's my bill? It should return the answer. This was a simple call, but essentially built entirely on the new stack. We were not even using LangChain4j or Spring AI at that point in time. Then we realized, as we started scaling our teams, that we had to reduce the entry barrier. There was still a lot of code which had to be written. That is when the DSL started to emerge, which brought down the entry barrier and democratized development. It's called LMOS ARC, the agents reactor, as we call it.

By July this year, we realized that it's not only frameworks and platforms which are going to accelerate this; we needed to change, essentially, the lifecycle of developing applications. Because it's a continual iteration process, prompts are so fragile and brittle, and there are data scientists, engineers, and evaluation teams involved, the traditional development lifecycle needed to change. We ran an initiative called F9, which is derived from SpaceX's Falcon 9. Then we started developing agents, and we brought the development time of a particular agent down to 12 days. In that one month, we released almost 12 use cases. Now we are at a place where we have a multi-agent platform which is completely cloud native. This is what we will talk about now.

Stats (Frag Magenta 1BOT, and Agent Computing Platform)

Some of the numbers we have today. We have started replacing some of the use cases in Frag Magenta with the LLM powered agents. So far, more than a million questions have been answered by the use cases for which we have deployed this, with an 89% acceptable answer rate. That is more than 300,000 human-agent conversations deflected, with a risk rate under 2%. Not only that, we were able to benchmark what we built against some of the LLM powered vendor products. We did the A/B testing in production, and agent handovers were around 38% better in comparison to the vendor products for the same use cases that we tried. Going back to the Inception analogy, one of the things with the dream architects is that they used to create worlds which are constrained, so that the dreamer cannot go into an infinite, open-ended world.

That is exactly the construct that we wanted to perfect and bring down into the platform: those closed loop, Penrose-steps-like constructs, baked right into the platform, so that the regular use case developers need not worry about them. Let's look at some of the numbers for this platform, what it has done. The development time of an agent which represents a domain entity, like billing or contracts, a top-level domain for which we develop agents: when we started, it was 2 months, and now it has come down to 10 days. This involves a lot of discovery of the business processes, API integration, and everything.

That is for a simple agent with a direct API. Also, for the business use cases: once you build an agent, you can enhance it with new use cases. Say you release a billing agent; you can enhance it with a new feature or use case, like now it can answer or resolve billing queries. This is the usual development lifecycle, not building agents every day. It used to take weeks, and it is now brought down to 2.5 days. Earlier we used to release only once per month. As most of you might know, given the brittleness and fragility of these kinds of systems, you cannot release fast, especially for a company with a brand like Deutsche Telekom; it can be jailbroken if you don't do the necessary tests.

We brought it down to two releases per week in production. Risky answers: there are a lot of goof-ups as well; the latest one was someone jailbreaking a bot and turning it into a [inaudible 00:15:26] bot or something. The thing is, we need to design for failure. Earlier, we used to rework the whole build, but right now we have the necessary constructs in the platform which allow us to intervene and deploy a fix within hours. That, in essence, is what the platform stands for, which we refer to as the agent computing platform, which we will talk about here.

Anatomy of Multi-Agent Architecture

Whelan: Let me get you started by giving you an overview of our multi-agent architecture. It's quite simple to explain. We have a single chatbot facing our customer, our user, and behind that we have a collection of agents, each agent focusing on a single business domain and running as a separate, isolated microservice. In front of that, we have an agent router that routes each incoming request to one of those agents. This means that during a conversation, multiple agents can come into play. At the bottom here we have the agent platform, which is where we integrate services for the agents, such as the customer API and the search API. The search API is where all our RAG pipelines reside; the agents themselves don't really have to do much of this RAGing, which obviously simplifies the overall architecture.
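
To make this shape concrete, here is a minimal Kotlin sketch of the top-level design. All names (Agent, AgentRouter, ConversationContext) are illustrative assumptions, not the actual LMOS APIs.

```kotlin
// Minimal sketch of the chatbot -> router -> agents design; all names are
// illustrative assumptions, not the actual LMOS APIs.
data class ConversationContext(val channel: String, val messages: List<String>)

interface Agent {
    val name: String
    val capabilities: List<String>  // e.g. "answer billing questions"
    suspend fun execute(context: ConversationContext): String
}

interface AgentRouter {
    // Picks one of the registered agents for the incoming request,
    // e.g. based on the detected intent of the last user message.
    suspend fun route(context: ConversationContext, agents: List<Agent>): Agent
}

class Chatbot(private val router: AgentRouter, private val agents: List<Agent>) {
    // A conversation may hit several agents over its lifetime,
    // but each request is answered by exactly one.
    suspend fun handle(context: ConversationContext): String =
        router.route(context, agents).execute(context)
}
```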

There were two key factors in choosing this kind of design; there are a lot of pros and cons. The first one is that we needed to scale up the number of teams working on the application. We had a very ambitious roadmap, and the only way we were going to achieve it was with multiple teams working on the application in parallel. This is a great design for that. Then we have this prompt Jenga. Basically, LLM prompts can be fragile, and whenever you make a change, no matter how small, you are at risk of breaking the entire prompt. With this multi-prompt agent design, the worst case is that you break a single agent, as opposed to having the entire chatbot collapse, kind of like Jenga. This is definitely something we struggled with quite a bit at the beginning.

The Evolution of the Agent Framework

That's the top-level design. Let's go one level deeper and take a look at the actual code. What I have here on the left is one of our first billing agents. We had a very traditional approach here: a billing agent class, an agent interface, an LLM executor to call the LLM, a prompt repository to pull out prompts, and we mixed the whole thing up in this execute method. As you can see, there's a lot happening in there. Although this was a good start, we did identify key areas that we simply had to improve, the top one being the high knowledge barrier. If you wanted to develop the chatbot, you basically had to be a Spring Boot developer. For a lot of our teammates, who were data scientists more familiar with Python, this was a little tricky.

Even if you were a good Spring Boot developer, there’s a lot of boilerplate code you needed to learn before you could actually become a productive member of the team. Then we were also missing some design patterns, and also the whole thing was very much coupled to Spring Boot. We love Spring Boot for sure, but we were building some really cool stuff, and we wanted to share it, not only with other teams, but as Arun pointed out, with the entire world. This gave birth to ARC. ARC is a Kotlin DSL designed specifically to help us build LLM powered agents quickly and concisely, where we’re combining the simplicity of a low-code solution with the power of an enterprise framework. I know it sounds really fancy, but this started off as something really simple and really basic, and has really grown into our secret sauce when it comes to achieving that breakneck speed that Arun goes on about all the time.

Demo – ARC Billing Agent

Let's go through a demo. We're now going to look at our billing agent. We've simplified it for the purpose of this demo, but what I show you is stuff that we actually have in production, and it should be relevant no matter what framework you use. This is it. This is our ARC DSL. Basically, we start off defining some metadata, like the name and the description. Then we define what model we want to use. We're currently transitioning to 4o. Unfortunately, every model behaves differently, so it's a big achievement to migrate to a newer model. The models don't always behave better, either; sometimes we actually see a degradation in our performance. That's also quite interesting. Here in the settings, we always set the temperature to 0 and use a static seed.

This makes the LLM's results a lot more reproducible. It also reduces the overall hallucinations of the bot. Then we have some filter inputs and outputs and tooling, and we'll take a look at that. First, let's take a look at the heart of an agent, the system prompt. We start off by giving the agent a role, some context, a goal, an identity. Then we continue with some instructions. We like to keep our instructions short and concise. There's one instruction here I would like to highlight, which I always have in all my prompts, and that is: we tell the LLM to answer in a concise and short way. Combining this with the settings we had up there really reduces the surplus information that the LLM gives.

At the beginning, we had the LLM giving perfect answers, and then following up with, and if you have any further questions, call this number. Obviously, the number was wrong. The combination of these settings and this single line in the prompt really reduces the surplus information. Then down here, you can see we're adding the customer profile, which gives extra context to the LLM. It also highlights the fact that this entire prompt is generated on each request, meaning we can customize it, tailor it for each customer, each NatCo, or each channel, which is a very powerful feature that we rely on heavily. There we go.
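
Pulled together, the agent definition described so far might look roughly like the sketch below. It is written in the spirit of the ARC DSL as presented here, not copied from the framework; exact property names may differ, and customerProfile() is an illustrative placeholder.

```kotlin
// Sketch in the spirit of the ARC DSL described above; exact names may differ.
agent {
    name = "billing-agent"
    description = "Handles billing related questions."
    model { "GPT-4o" }
    settings {
        temperature = 0.0  // temperature 0 plus a static seed makes results far more reproducible
        seed = 42
    }
    prompt {
        // Regenerated on every request, so it can be tailored per customer,
        // per NatCo, or per channel. customerProfile() is a placeholder.
        """
        You are a friendly assistant for billing questions.
        Answer in a concise and short way.

        # Customer Profile
        ${customerProfile()}
        """
    }
}
```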

Now we come to the knowledge block. Here we're basically listing the use cases that the LLM agent is meant to handle, together with the solution. We also have here some steps, which is how we do a little bit of dialog design, dialog flow. I'll demonstrate that. As you can see, the knowledge we're injecting here isn't actually that much. Obviously, in production we have a lot more knowledge, but we're talking about maybe one or two pages. With modern LLMs that have a context window of 100,000 tokens, we don't need RAG pipelines for the majority of our agents, which super simplifies the overall development. Let's take a look at these filters on input and output. These constructs allow us to validate and augment the input and output of an agent. We have, for example, this CustomerRequestAgentDetector. If a customer comes and asks specifically for a human agent, then this filter will trigger, and that process will then be kicked off.

We have, for example, here, this CustomerRequestAgentDetector. If a customer comes and they ask specifically for a human agent, then this will trigger this filter, and that process would be then triggered. We then also have a HackingDetector. Like any other software, LLMs can be hacked, and with this filter here, we can detect that, and it will throw an exception, and the agent will no longer be executed. Both these filters, in turn, themselves, use LLMs to decide if they need to be triggered or not. Then, once the output has been generated, we clean up the output a bit. We can often see these back ticks and these back tick JSONs. This happens because we’re feeding the LLM in the system prompt with a mixture of Markdown and JSON, and this often happens in the output.

We can remove these by simply putting a minus and then the text. Then, we want to detect if the LLM is fabricating any information. Here, we can use regular expressions within this filter to extract all the links and then verify that these links are actually valid links that we expect the LLM to be outputting. Then, finally, we have this UnresolvedDetector. As soon as the LLM says it cannot answer a question, this filter is triggered, and then we can fall back to another agent, which in most cases is the FAQ agent; that agent in turn holds our RAG pipelines and should hopefully be able to answer any question that the billing agent itself cannot answer.
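
In the same sketch style, the filter section described above might look like this. CustomerRequestAgentDetector, HackingDetector, UnresolvedDetector, and the minus syntax come from the talk; LinkVerifier and all constructor parameters are illustrative assumptions.

```kotlin
// Filter sketch; detector names are from the talk, signatures are assumptions.
filterInput {
    +CustomerRequestAgentDetector()  // hand over when the customer asks for a human agent
    +HackingDetector()               // throws on prompt injection, aborting the agent
}
filterOutput {
    -"```json"   // strip back ticks leaking from the Markdown/JSON system prompt
    -"```"
    +LinkVerifier(allowedLinks)                  // regex-extract links, reject fabricated ones
    +UnresolvedDetector(fallback = "faq-agent")  // fall back to the RAG-backed FAQ agent
}
```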

These are LLM tools. LLM tools are a great way to extend the functionality of our agent. As you can see here, we have a lot of billing related functions like get_bills and get_open_amount, but we also have get_contracts. This is a great way for our agents to share functionality with each other. Usually, you would have a team that has already built these functions for you, but if you need to build them yourself, don't worry, we have a DSL for that as well. As you can see here, we have a function; it has a name, get_contracts. We give it a description, which is very important, because this is how the LLM determines whether this function needs to be called. What is unique to us is that we have this isSensitive field.

As soon as the customer is pulling personalized data, we mark the entire conversation as sensitive and apply higher security constructs to that conversation. This is obviously very important to us. Then, within the body, we can simply get the contracts, as you can see here, with a little bit of magic: we don't have to provide any user access token, all of that happens in the background. Then we generate the result. Because the result of this function is fed straight back into the LLM, it's very important for us that we anonymize any personal data. Here we have this magical function, anonymizeIBAN, which will anonymize that data so that the LLM never sees the real customer data. Again, it's a little bit of magic, because just before the customer gets the answer, this will be deanonymized, so that the customer sees their own data. That's functions.
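
A corresponding sketch of the get_contracts function described above. isSensitive and anonymizeIBAN are from the talk; customerApi, toJson(), and the exact function(...) signature are assumptions.

```kotlin
// Tool sketch; isSensitive and anonymizeIBAN are described in the talk,
// the surrounding signature is an assumption.
function(
    name = "get_contracts",
    description = "Returns the contracts of the current customer.",  // the LLM uses this to decide when to call
    isSensitive = true,  // pulling personal data marks the whole conversation as sensitive
) {
    // No user access token is passed here; it is resolved in the background.
    val contracts = customerApi.getContracts()
    // The result is fed straight back into the LLM, so IBANs are anonymized here
    // and deanonymized again just before the customer sees the answer.
    contracts.toJson().anonymizeIBAN()
}
```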

I think it's time now to look at it in action. Let me see if this is working. Let's see, and ask: how can I pay my bill? You see this? It's asking us a question, whether we're talking about mobile or fixed line. Say, mobile. I'm really happy this works. LLMs are unpredictable, so this is great. As you can see here, we have actually implemented a slight dialog flow: we've prompted the LLM to execute this step before showing the information. This is important because, a lot of the time, if we go back here to the system prompt, you can see that we are giving the LLM two options, two IBANs, and the LLM naturally wants to give the customer all the data it has. Without this step that we've defined up here, the LLM will simply return this massive chunk of text to the customer. We want to avoid that. These steps are a very powerful mechanism allowing us to simplify the overall response for the customer. I think that's it.

This is the entire agent. Once we’ve done this, once we’ve done the testing, we just basically package this as a Docker image and upload it into our Docker registry.

Joseph: What Pat shied away from saying is that it's just two files. It's pretty simple. Why did we do this? We wanted access for our developers, who already know the ecosystem. They would have built APIs for contracts and billing, and they are familiar with the JVM ecosystem. These are two scripting files, Kotlin scripts, so an agent can be handed to the developer and given to the data scientists, along with the view. It comes with the whole shebang for testing.

One Agent is no Agent

We'll do a quick preview of the LMOS ecosystem, because, like I said, the plan is not to have one agent. We needed to provide the constructs for managing the entire lifecycle of agents. One agent is no agent. This comes from the actor model. We used to discuss this quite a lot when we started. How do you design the society of agents? Should it be the actor approach? Should there be a supervisor? In essence, where we came out was: don't reinvent the wheel, but provide enough constructs to allow extensibility with different patterns. Take the billing agent: from a developer point of view, what they usually do is just develop the business functionality and then push it as a Docker image. We'll change that into Helm charts in a bit. But that is not enough if you want this agent to join the system.

For example, the Frag Magenta bot is composed of multiple agents. You need discoverability. You need version management, especially for multiple channels. Then there's dynamic routing, routing between agents: which agents need to be picked for a particular intent? It can be a multi-intent query as well. Not only that, the problem space was huge: multiple countries, multiple business processes. How do you manage the lifecycle when everything can go wrong with one change in one prompt? All those learnings from building microservices and distributed systems still apply. That means we needed to bring that enterprise grade platform to run these agents.

LMOS Multi-Agent Platform

This is the LMOS multi-agent platform. The idea is that, just like Heroku, the developer only does the Docker push, or the git push heroku master. Similarly, we wanted to get to a place where you git push your agent to LMOS, and everything else is taken care of by the platform. What it actually has is a custom control plane that we built, called the LMOS control plane, built on existing constructs around Kubernetes and Istio. What it allows is that agents are now first-class citizens in the fabric, in the ecosystem, as a custom resource, and so is the idea of channels. A channel is the construct by which we group agents to form a system, for example, Frag Magenta. We needed agent traffic management.

For example, for Hungary, what is the traffic that you need to migrate to this particular agent? Tenant and channel management. Also, agent release is a continuous iteration process. You cannot just develop an agent, push it to production, and believe that all is going to work well. You need all those capabilities. Then we also have a module called LMOS RUNTIME, which bootstraps the system with all the agents required for a particular system.

We'll show a quick walkthrough of a simple agent. For example, there is a weather agent, which is supposed to work only for Germany and Austria. We have introduced custom channels: say it needs to be available only for the web and app channels. Then we provide these capabilities: what does this agent provide as capabilities? This is super important, because it's not only the traditional routing based on weights and canaries, which is important; multi-agent systems now require intent-based routing, which you cannot really configure by hand, and which is what the LMOS router does.

Essentially, it provides bootstrapping of even the router, based on the capabilities which an agent advertises once it's pushed into the ecosystem. We did not want to build this as a closed platform where you can only run your ARC agents, or agents on the JVM or in Kotlin; we were also keeping a watch on the rest of the ecosystem, which is catching up, or rather moving much faster. You can bring your own Python, LangChain, LlamaIndex, whatever agent. The idea is that they can all coexist in this platform if they follow the specifications and the runtime specifications that we are coming up with. You can also bring a non-ARC agent, wrap it into the fabric, deploy it, and even the routing is taken care of.
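
To make capability-based, intent-driven routing concrete, here is a small illustrative sketch; it is not the LMOS router implementation, and every name in it is an assumption.

```kotlin
// Illustrative sketch of intent-based routing over advertised capabilities;
// not the actual LMOS router.
data class AgentCapability(val agentName: String, val description: String)

class IntentRouter(private val llm: suspend (String) -> String) {
    // Asks a (cheap) model to match the utterance against the capabilities
    // that deployed agents have advertised to the control plane.
    suspend fun route(userMessage: String, capabilities: List<AgentCapability>): String {
        val catalog = capabilities.joinToString("\n") { "- ${it.agentName}: ${it.description}" }
        val prompt = """
            Pick the single best agent for the user message below.
            Agents:
            $catalog
            User message: $userMessage
            Answer with the agent name only.
        """.trimIndent()
        return llm(prompt).trim()
    }
}
```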

We will show a quick demo of a multi-agent system. It is composed of two agents, a weather agent and a news summarization agent. We will start by asking a question to summarize a link. The system should not answer, because this agent is not available in the system right now; there is only one agent. Now let's assume Pat has developed a news agent, deployed it, and just done the LMOS push. Right now it's packaged as Helm charts, and it's just installed. As you can see, there's a custom resource; you can manage the entire lifecycle with the very familiar tooling that you already know, which is Kubernetes. Now we apply a channel.

For example, take the UI that we've shown: assume this agent should be made available only for Germany and for one channel. This should not usually result in additional routing configuration, which means that once the agent advertises, I can now handle news summary use cases, the router is automatically bootstrapped, dynamically discovers it, routes the traffic for this particular channel, and picks up the right agent. Of course, it's a work in progress. The idea is not to have one strategy. If you look at all the projects that are there, LMOS control plane, LMOS router, LMOS runtime, these are all different modules which provide extensibility hooks so that you can come up with your own routing strategies if need be.

Takeaways

Whelan: When I started this project a year ago, as I said, I thought everything would change. I started burning my Kotlin books. I thought I was going to be training LLMs, fine-tuning LLMs, but really not much has changed. At its core, our job is still very much about data processing and integrating APIs, and an LLM is just another API to integrate. At least, nothing has changed yet. That said, we see a new breed of engineer coming out. I'm an engineer, and I've spent 500 hours prompt engineering, prompt refining. What we see is this term being coined: LLM engineer. Though a lot has stayed the same, and we're still using a lot of the same technologies, a lot of the same tech stack, the set of capabilities that we want from our developers is definitely growing in this new age of LLMs.

Joseph: Especially if you're an enterprise, we have seen this: there are many initiatives within Deutsche Telekom, and we often see everyone trying to solve these problems twice, thrice within the enterprise. The key part is that you need to figure out a way in which this can be platformified, like building your own Heroku, so that these hard concerns are handled by the platform and it allows democratization of building agents.

You need not look for AI engineers, per se, for building use cases; what you need is a core platform team that can build this. Choose what works best for your ecosystem. This has been quite a journey, going against the usual principles: use this framework, use that framework, why would you want to build it from scratch, and all that. So far, we've managed to pull it off. It was pretty clear that if this needed to continue, it needed to be open sourced, because the open-source ecosystem thrives on ideas and not just frameworks, and we wanted to bring all those contributions back into the ecosystem.

Summary

Just to summarize the vision that we had when we started this journey: we did not want to just create use cases. We saw an opportunity: if we could create the next computing platform, ground-up, what would its layers look like, like the network architecture or the traditional computing layers that we are already familiar with? At the bottommost layer, we have the foundational computing abstractions, which cover prompt optimization, memory management, how to deal with LLMs, the low-level constructs. The layer above is the single agent abstractions layer: how do you build single agents, and what tooling and frameworks can we bring in to allow this? On top of that is the agent lifecycle layer: whatever framework an agent is built with, you need to manage the lifecycle of agents, and it is different from traditional microservices.

It brings in additional requirements around shared memory and conversations, the need for continuous iteration, and the need to release only to specific channels to test things out, because no one knows. The last one is the multi-agent collaboration layer, which is where we can build the society of agents. If you have these abstractions, it allows a thriving set of agents which can be open and sovereign, so that we don't end up in a closed ecosystem of agents provided by whatever monopolies might emerge in this space. We designed LMOS to absorb each of these layers. This is the vision. Of course, we are building use cases, but this has been the construct in our minds since we started this journey. All of those layers and modules are now open sourced, and it's an invitation for you to join us in our GitHub org and help define the foundations of agentic computing.

Questions and Answers

Participant 1: I would be interested in the QA process for those agents; how do you approach it? Do you have some automation there? Do you run this with other LLMs? Is there a human in the loop, something like that?

Joseph: The key part is that there are a lot of automation requirements. In Deutsche Telekom, for example, we needed human annotators to start with, because there is no technique yet by which an automated pipeline can reliably catch hallucinations or risky answers. We started out with human annotators. Slowly, we are building the layer which restricts the perimeter of the risky questions that might come up.

For example, if somebody has flagged a question, or that kind of question, it can go into the list of test cases which runs against every new release of that agent. It's a continual iteration process. Testing is a really hard problem. That's also the reason why we need all those guardrails absorbed somewhere, so that the developer need not worry about all that. There is also the need to reduce the blast radius: release only to maybe 1%, 2% of the customers and get feedback. These are the constructs we are investing in. A solution for fully automated LLM guardrailing is not yet there. If you define the perimeter of an agent to be small, it also makes testing much better.

Whelan: Testing is awful. It's very tricky. That's especially why we wanted to have these isolated microservices, so we can really limit the damage, because often when we do break something, we don't realize it until it's too late. Unfortunately, it's not a problem that we're going to solve anytime soon, I think, and we still need human agents in the middle.

Participant 2: Basically, as far as I understood, there's a chatbot at the end, which is what is available to the user, and then there's an underlying set of agents. Do you have active agents that can actually do things? Not like in this example, which provides information in some form or reads data from the system, like contracts, but agents that really do the work, make changes in the system, maybe irreversible ones, or something like that?

Joseph: Yes. For example, if you want to take actions, it's essentially API calls, from a simplicity standpoint. You want to limit the perimeter: update IBAN, for example, is a use case we built, but it is awaiting the PSA process, because you need the approval of privacy and security. It really works. Essentially, the construct of an agent that we wanted to bring in includes the ability to take actions autonomously; that is a place you can get to. Also, for multiple channels, since you mentioned the chatbot: the idea is, what is the right way to slice an agent so you don't replicate the whole thing again for different channels? There could be features built in which allow it to be plugged into the voice channel as well. The billing agent, for example, is not only deployed for chat; we are now using the same constructs for the voice channels, which should potentially also take actions like asking for customer authentication and initiating actions.

Participant 3: I'm quite interested in the response delay. I saw you have hierarchical agent execution, and also, within the agent, in the billing agent example, you have two filters, like the hacking filter. Do they invoke GPT in sequential order or in parallel? If it is in sequential order, how do you minimize or optimize the delay?

Whelan: We have two ways we can do it. Often, we execute the LLMs sequentially, but in some cases we also run them in parallel, where it's possible. For the main agent logic, for the system prompt and everything, we use a higher model, like 4o. For these simpler filters, we usually use lower models, like 4o mini, or even 3.5, which execute a lot faster. Overall, this is something that can take a few seconds, and we're looking very much forward to models becoming faster.

Joseph: What you saw here was the ARC construct for building agents, which allows quick prototyping. We are now releasing it as a way for developers to work with. What is in production also has a lower-level construct called the LMOS kernel, which we built, and which is not based on this kind of simple prototyping element; it essentially looks like a step chain. For example, for an utterance that comes in, you first want to check whether it contains any PII data. You need to remove the PII data, which requires named entity recognition to be triggered, a custom model that we run internally and have fine-tuned for German.

Then the next step could be to also check whether it contains an injection prompt: is it safe to answer? All of that could potentially be triggered in parallel within that loop as well. There are two constructs; we have only shown one here, which provides the democratization element, but we are still working out how to balance programmability, which brings in these kinds of capabilities. The ARC DSL is fully extensible: you can come up with new constructs like repeat, or in parallel, with a couple of function calls executing in parallel. That's also the beauty of the DSL we are coming up with.
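
Where such checks are independent, they can run concurrently rather than sequentially. A minimal sketch with Kotlin coroutines, illustrative and not the production kernel:

```kotlin
import kotlinx.coroutines.async
import kotlinx.coroutines.coroutineScope

// Run independent safety checks (e.g. PII detection, injection detection)
// concurrently, so latency is bounded by the slowest check, not their sum.
suspend fun screenUtterance(
    utterance: String,
    checks: List<suspend (String) -> Boolean>,
): Boolean = coroutineScope {
    checks.map { check -> async { check(utterance) } }
        .all { it.await() }  // true only if every check passes
}
```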

Participant 4: You built a chat and voice bot, and it seems like it was a lot of work. You had to get into agents, you had to get into LLMs, you had to build a framework, and you also dealt with issues that only exist with LLMs, like hallucination. Why did you not pick a chatbot or voice bot system off the shelf? Why did you decide to build your own system?

Joseph: Frag Magenta right now, if you check today, is not completely built on this. We already had Frag Magenta before we started this team. It is based on a vendor product, and it follows the pre-designed dialog approach, which used to be the usual case. It's not like we built this yesterday; we already had a bot. The solution rates, however, were low, because with dialog tree-based approaches you can never anticipate what the customer might ask. You used to have their custom DSL, which looks like a YAML file, where you say: if the customer asks this, do this, do that. That's where this came in. When LLMs came in, we decided: should we not try a different approach? There was a huge architectural discussion, and POCs were created.

Should we go with fluid flows, especially in a company like Deutsche Telekom? If you leave everything open for the LLMs, you never know what brand issues you might end up with, versus the predictability of the dialog tree. This was a key point in our design. I showed this number, 38% better than vendor products. We came up with a design which, at least we think, is the right course of action. It's a mix between the dialog tree and a completely fluid flow where you don't guardrail at all. This is the programmability that we are bringing in, which allows a dialog design that combines both, and which showed better results. That 38% was, in fact, a comparison: the vendor product also came up with LLMs, but there the LLM was used as a slot filling machine, and this was performing better. We are migrating most of the use cases to this new architecture.



Harare City Council Speaks Out on the Demolition of Houses in Belvedere Amid Social Media Outcry


Harare City Council recently carried out demolitions of illegally built houses in Belvedere.

Videos and images of excavators tearing down newly constructed homes quickly circulated on social media, triggering an outcry among Zimbabweans.

Many netizens criticized the council, accusing it of being insensitive to affected residents.

Also Read: Zimbabwe High Court Protects Homeowners in Landmark Ruling Against Demolitions

City Council Explains Demolitions and Addresses Concerns

In response to the public backlash, the Harare City Council took to Facebook to explain the circumstances surrounding the demolitions.

Harare City Council Speaks Out on the Demolition of Houses in Belvedere (Image Credit: Facebook)

According to the council, dozens of homes in Ridgeview, Belvedere, had been built with fraudulent documents, with some residents claiming state-sanctioned land allocations that proved to be false.

“Dozens of houses in an area in Ridgeview, Belvedere were built using fake papers with some illegal settlers claiming that they were allocated the land by the state.”

The council further disclosed that this was not the first time the Belvedere structures had been demolished. According to the council, the illegal houses were initially destroyed last year, but the occupants, encouraged by the land barons who had sold them the plots under false pretences, defied city orders and rebuilt their houses.

The City Council stressed that the residents had received prior warnings and were asked to vacate before any demolition action was taken.

“The illegal structures were initially destroyed last year but the occupiers, at the instigation of the land barons who sold them the land, defied the City of Harare and erected new structures. The City of Harare issued the illegal settlers prior warning to vacate the settlement before the demolitions.”

Authorities Warn of Increased Demolition Activity

Acting Director of Planning Samuel Nyabeza highlighted the city’s commitment to intensifying demolitions of unauthorized structures in Harare. He stressed the importance of restoring urban order by ensuring that all settlements are built with necessary approvals and essential services.

Nyabeza noted that housing developments must be carefully planned with access to critical amenities such as water and sewer systems before construction begins.

Nyabeza encouraged residents to verify land ownership and council approvals before purchasing land or beginning construction.

“We have to restore order in the city and we will not tolerate a situation where people just build houses without approvals and permission from council. A settlement has to be planned with all amenities in place before people start building.

“You cannot build a house without council-approved plans; you cannot build where there are no sewer and water facilities. We urge residents to check with the City of Harare before buying land. Even when building a structure, every stage should have council approvals,” he remarked.

Demolitions to Continue City-Wide

The City of Harare has clarified that these demolitions are part of a broader effort to eliminate illegal settlements.

According to the council, similar demolitions are scheduled to take place across various parts of the city in the coming days to maintain lawful urban planning and prevent unauthorized building activity.



Portuguese Man O’ War Sightings Have Tybee Island Beachgoers On Alert


Tybee Island residents and beach visitors are urged to exercise caution after several Portuguese Man o’ War sightings along the beach on Friday morning.

These striking, jellyfish-like creatures, known for their vibrant colors and painful stings, have recently washed ashore, posing a risk to anyone who might accidentally come into contact with them.

Just The Facts:

What: Portuguese Man o’ War sightings along Tybee Island’s shores.

When: Sightings confirmed early Friday morning.

Risk: The tentacles can deliver a powerful sting that may still be active even if the creatures are washed up and appear dead.

Safety Measures: Residents and visitors are advised not to touch the creatures and to keep a safe distance.

About the Portuguese Man o’ War: The Portuguese Man o’ War is not a jellyfish, despite its similar appearance. It’s a siphonophore — a colony of tiny, specialized organisms that work together as one unit. Known for their distinctive purple-blue coloring and transparent “float” that can extend up to six inches above water, these creatures can be challenging to spot but pack a serious sting.

The Man o’ War’s tentacles can stretch up to 30 feet and are armed with venom-filled nematocysts. Even if washed ashore and seemingly lifeless, their tentacles can still deliver a painful sting, potentially causing nausea, fever, and respiratory issues in severe cases.

Why It Matters: With Tybee Island being a popular destination, especially as temperatures remain mild, these sightings are a reminder of the need for safety on the beach. Encounters with a Portuguese Man o’ War can lead to painful stings, particularly dangerous for children or anyone with allergies.

What To Do If Stung: If someone is stung by a Portuguese Man o’ War, follow these steps:

1. Rinse the affected area with salt water; do not use fresh water, which can trigger more stinging cells.

2. Avoid touching the sting site or rubbing the area, as this may worsen the injury.

3. Seek medical attention if the pain is severe or if there are any signs of an allergic reaction.


Tybee Island's shores are under threat as Portuguese Man o’ War sightings escalate, their deceptive beauty masking painful stings. Caution is crucial; safety measures are paramount to protect beachgoers from these hidden dangers.
B.T. Clark

B.T. Clark is an award-winning journalist with 25 years of experience in journalism. His work has appeared in several newspapers throughout the state, including Neighbor Newspapers, The Cherokee Tribune and The Marietta Daily Journal. He is the publisher of The Georgia Sun and a fifth-generation Georgian.



Warriors Return Home to Host Hurricanes – OurSports Central


Moose Jaw Warriors

November 8, 2024 – Western Hockey League (WHL)
Moose Jaw Warriors News Release

Moose Jaw, SK – The Moose Jaw Warriors are back at the Hangar in downtown Moose Jaw on Friday night to battle the Lethbridge Hurricanes.

After spending five games and 11 days on the road through the BC Division, the Warriors were happy to be back in Moose Jaw this week to work on fine-tuning their game.

“Definitely excited to play in front of our own fans, it’s been a little while,” Warriors captain Brayden Yager said. “It’s nice to get some good hours in of practice before this weekend.”

The Warriors didn’t have the road trip they had hoped for in BC, finishing with a 1-4 record over the five games.

Head Coach Mark O’Leary said there are good lessons that Moose Jaw can take from that stretch.

“It’s recognizing that we’re in good places, but understanding the importance of the competitive mindset and digging in when you’re in those spots,” he said. “That’s the thing that’s getting in our way right between a win and a loss.”

The Warriors will take on the Lethbridge Hurricanes on Friday night at the Hangar.

Lethbridge comes in sitting second in the Eastern Conference and leading the Central Division with a 9-5-1-0 record in 15 games this season.

18-year-old forward Miguel Marques leads the Hurricanes with 23 points, while Logan Wormald leads the team with nine goals so far this season.

O’Leary said Moose Jaw’s willingness to work will be key against Lethbridge.

“They check hard,” he said. “They make things real difficult inside the dots and that’s what the game is, taking pucks from along the boards and trying to get them inside dots and that’s where you’re dangerous and on the flipside, trying to keep them from doing it.”

Yager said the Warriors know competing hard is the key to success for them and it will have to start on Friday night.

“They play hard and it’s just going to be a test for us to come in and play even harder,” he said. “There’s not very many teams that work harder than Lethbridge, so if we can come out and match them, we’ll be heading in the right direction.”

The Warriors and Hurricanes meet at 7 p.m. at the Moose Jaw Events Centre.

If you can’t make it out, tune into IKS Media Warriors Live on CHL TV, starting with the Pre-Game Show at 6:45 p.m. You can also catch all the action with Voice of the Warriors James Gallo on Country 100.


The opinions expressed in this release are those of the organization issuing it, and do not necessarily reflect the thoughts or opinions of OurSports Central or its staff.
