MongoDB – Extension of the Recovery? – Finanzen.net

MMS Founder
MMS RSS

Posted on mongodb google news. Visit mongodb google news

Measured from its cyclical high of 509.62 USD in February 2024, the MongoDB share (WKN: A2DYB1) remains in an intact overarching downtrend. With the interim low of 140.78 USD seen on April 7 of this year, the stock came within touching distance of the cyclical low from November 2022 (135.15 USD). The bullish reversal candle (a doji) formed there on high volume set off a dynamic recovery up to 174.03 USD. The stock is currently digesting that upswing and has pulled back to the 50% retracement. A rise above the next resistance zone at 162.49-164.72 USD would now send a bullish follow-through signal in the short-term time frame. In that case, potential next recovery targets are 170.04-174.03 USD, 178.10/180.32 USD, and 188.66 USD. A slide below the support zone at 153.48-157.40 USD on a closing basis, meanwhile, would initially argue for a revisit of the support area at 140.78-145.85 USD and possibly a near-term test of the 2022 low at 135.15 USD.

Article originally posted on mongodb google news. Visit mongodb google news



How GitHub Built Sub-Issues into Its Issue Tracking System

MMS Founder
MMS Sergio De Simone

Article originally posted on InfoQ. Visit InfoQ

Coinciding with the general availability of sub-issues, GitHub engineer Shaun Wong shared insights into how the team added support for hierarchical issue structures, the lessons learned during development, and the key role sub-issues played in their workflow.

Launched in preview a few months ago, GitHub sub-issues enable developers to organize tasks using a parent-child hierarchy. This structure helps break down complex tasks into smaller, more manageable components. Additionally, by grouping related activities in a hierarchical format, teams can track progress more effectively and provide detailed insights into how each sub-task contributes to the overall project.

For example, a parent issue can be broken down into discrete sub-tasks, each assigned to a distinct team in the organization, such as marketing, UI/UX design, backend development, frontend development, and so on.

The first decision GitHub engineers faced was whether to modify the existing task list functionality or design an entirely new hierarchical structure. They ultimately chose the latter, which required significant changes to the underlying data models and rendering logic.

From a data modeling perspective, the sub-issues table stores the relationships between parent and child issues. For example, if Issue X is a parent of Issue Y, the sub-issues table would store this link, ensuring the hierarchical relationship is maintained.

One key feature was the automatic updating of a parent issue’s progress based on its sub-issues, using a sub-issue list table. This eliminated the need to manually check or navigate through the hierarchy to monitor status.
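To make the idea concrete, here is a minimal sketch of a parent-child link table and a derived progress rollup. The schema, column names, and query are illustrative assumptions, not GitHub's actual internal model.

```python
import sqlite3

# Hypothetical, simplified schema: one row per parent/child link,
# mirroring the kind of sub-issues table described above.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE issues (
    id     INTEGER PRIMARY KEY,
    title  TEXT NOT NULL,
    closed INTEGER NOT NULL DEFAULT 0   -- 0 = open, 1 = closed
);
CREATE TABLE sub_issues (
    parent_issue_id INTEGER NOT NULL REFERENCES issues(id),
    child_issue_id  INTEGER NOT NULL UNIQUE REFERENCES issues(id)
);
""")

conn.executemany("INSERT INTO issues VALUES (?, ?, ?)", [
    (1, "Ship the feature", 0),
    (2, "Backend work", 1),
    (3, "Frontend work", 0),
])
conn.executemany("INSERT INTO sub_issues VALUES (?, ?)", [(1, 2), (1, 3)])

# Parent progress is derived from the link table, so no manual roll-up is needed.
done, total = conn.execute("""
    SELECT SUM(i.closed), COUNT(*)
    FROM sub_issues s JOIN issues i ON i.id = s.child_issue_id
    WHERE s.parent_issue_id = ?
""", (1,)).fetchone()
print(f"Issue 1 progress: {done}/{total} sub-issues closed")  # -> 1/2
```

Because progress is computed from the link table, closing a child issue is enough to move the parent's counter.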

At the implementation level, sub-issues are modeled using MySQL relationships and exposed via GraphQL endpoints, enabling efficient and flexible data retrieval.
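As a rough idea of how such a GraphQL exposure might be consumed, the sketch below posts a query to GitHub's GraphQL endpoint. The `subIssues` connection name and its shape are assumptions based on the feature described here, so the exact fields should be checked against GitHub's published schema.

```python
import os
import requests  # third-party: pip install requests

# Hypothetical query; the subIssues connection name and fields are assumptions.
QUERY = """
query($owner: String!, $repo: String!, $number: Int!) {
  repository(owner: $owner, name: $repo) {
    issue(number: $number) {
      title
      subIssues(first: 20) {
        nodes { number title state }
      }
    }
  }
}
"""

def fetch_sub_issues(owner: str, repo: str, number: int) -> dict:
    resp = requests.post(
        "https://api.github.com/graphql",
        json={"query": QUERY,
              "variables": {"owner": owner, "repo": repo, "number": number}},
        headers={"Authorization": f"Bearer {os.environ['GITHUB_TOKEN']}"},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()

if __name__ == "__main__":
    print(fetch_sub_issues("octocat", "hello-world", 1))
```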

According to Wong, their internal use of sub-issues across multiple projects has proven effective in simplifying and accelerating project management.

Our teams found that sub-Issues significantly improved their ability to manage large projects. By breaking down tasks into smaller, actionable items, they maintained better visibility and control over their work. The hierarchical structure also made it easier to identify dependencies and ensure nothing fell through the cracks.

Alongside sub-issues, GitHub also promoted several other previewed features to general availability. These include issue types, which allow classification of issues as bugs, features, tasks, and so on; advanced search, with support for complex queries using AND and OR; and an increased issue limit in GitHub Projects, now supporting up to 50,000 issues.




AWS Introduces MCP Servers for AI-Assisted Cloud Development

MMS Founder
MMS Steef-Jan Wiggers

Article originally posted on InfoQ. Visit InfoQ

AWS has announced the open-source release of AWS Model Context Protocol (MCP) Servers for Code Assistants, a suite of specialized servers designed to enhance AI-powered code assistants with AWS best practices. According to the company, these servers leverage AI to provide context-aware guidance to accelerate development, improve code quality, and ensure adherence to security and cost optimization principles.

The open-source release of MCP Servers for Code Assistants bridges AI-powered coding assistants (like Amazon Q, Claude, and Cursor) and AWS services – it enables these assistants to understand the nuances of AWS, offering intelligent suggestions and automating tasks that would otherwise require manual effort and deep AWS expertise.

As the authors of an AWS blog post on the open-source release describe:

Model Context Protocol (MCP) is a standardized open protocol that enables seamless interaction between large language models (LLMs), data sources, and tools. This protocol allows AI assistants to use specialized tooling and access domain-specific knowledge by extending the model’s capabilities beyond its built-in knowledge—all while keeping sensitive data local.

In addition, the blog post outlines the following key benefits:

  • Accelerated Development: MCP Servers significantly reduce development time by providing ready-to-use code snippets and configurations based on AWS best practices.
  • Enhanced Security: MCP Servers help developers implement secure configurations, ensuring IAM roles, encryption, and security policies align with AWS Well-Architected principles.
  • Cost Optimization: The Cost Analysis MCP Server provides insights into AWS pricing, helping developers make informed decisions and avoid unnecessary expenses.
  • Access to AWS Knowledge: MCP Servers seamlessly integrate with AWS documentation and knowledge bases, giving AI assistants access to that knowledge.
  • Infrastructure as Code (IaC): The AWS CDK MCP Server automates the generation of IaC templates, simplifying infrastructure provisioning.
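To give a feel for what sits behind such servers, here is a minimal tool-server sketch. It assumes the open-source MCP Python SDK (the `mcp` package on PyPI) and its FastMCP helper; the cost-lookup tool and its numbers are invented for illustration and are not one of the AWS-provided servers.

```python
# Minimal MCP-style tool server sketch.
# Assumes the MCP Python SDK ("mcp" on PyPI) and its FastMCP helper;
# the pricing data and tool below are invented for illustration only.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("toy-cost-helper")

# Hypothetical, hard-coded rates -- a real server would query pricing APIs.
HOURLY_USD = {"t3.micro": 0.0104, "t3.small": 0.0208}

@mcp.tool()
def estimate_monthly_cost(instance_type: str, count: int = 1) -> str:
    """Rough monthly on-demand cost estimate for an EC2 instance type."""
    rate = HOURLY_USD.get(instance_type)
    if rate is None:
        return f"No rate known for {instance_type}"
    return f"~${rate * 730 * count:.2f}/month for {count} x {instance_type}"

if __name__ == "__main__":
    # An MCP-capable assistant (Amazon Q, Claude, Cursor, ...) would connect
    # to this process over stdio and call the tool on the user's behalf.
    mcp.run()
```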

By leveraging AI to provide context-aware guidance, this open-source initiative has the potential to democratize AWS expertise and accelerate the adoption of secure and efficient cloud development patterns.

In a dev.to post, AWS Community Builder Arthur Schneider writes:

In today’s fast-paced tech world, we’re constantly looking for ways to accelerate development processes while improving quality. Especially in the AWS environment, where complexity increases with each new service, we need smarter tools that make our work easier. This is where MCP comes into play – a protocol that fundamentally changes the way we interact with AI models.

Lastly, the GitHub repository and the PyPI packages provide developers with example implementations to get started.




MongoDB Sees Unusually High Options Volume (NASDAQ:MDB) – MarketBeat

MMS Founder
MMS RSS

Posted on mongodb google news. Visit mongodb google news

MongoDB, Inc. (NASDAQ:MDB) was the recipient of unusually large options trading activity on Wednesday. Traders acquired 23,831 put options on the company. This is an increase of approximately 2,157% compared to the average volume of 1,056 put options.

MongoDB Stock Performance

NASDAQ MDB traded down $0.78 on Friday, reaching $159.26. The stock had a trading volume of 1,549,284 shares, compared to its average volume of 1,814,860. The stock has a market cap of $12.93 billion, a P/E ratio of -58.12 and a beta of 1.49. The business’s 50-day moving average price is $208.15 and its 200 day moving average price is $252.43. MongoDB has a 12-month low of $140.78 and a 12-month high of $387.19.

MongoDB (NASDAQ:MDB) last issued its quarterly earnings data on Wednesday, March 5th. The company reported $0.19 earnings per share (EPS) for the quarter, missing the consensus estimate of $0.64 by ($0.45). MongoDB had a negative return on equity of 12.22% and a negative net margin of 10.46%. The business had revenue of $548.40 million for the quarter, compared to analysts’ expectations of $519.65 million. During the same period in the previous year, the business posted $0.86 earnings per share. On average, sell-side analysts predict that MongoDB will post -1.78 earnings per share for the current year.

Insiders Place Their Bets

In other MongoDB news, CFO Srdjan Tanjga sold 525 shares of the stock in a transaction that occurred on Wednesday, April 2nd. The shares were sold at an average price of $173.26, for a total value of $90,961.50. Following the sale, the chief financial officer now directly owns 6,406 shares in the company, valued at $1,109,903.56. This represents a 7.57% decrease in their position. The transaction was disclosed in a document filed with the SEC, which is available through this hyperlink. Also, CEO Dev Ittycheria sold 18,512 shares of the company’s stock in a transaction dated Wednesday, April 2nd. The stock was sold at an average price of $173.26, for a total value of $3,207,389.12. Following the completion of the transaction, the chief executive officer now owns 268,948 shares of the company’s stock, valued at $46,597,930.48. The trade was a 6.44% decrease in their ownership of the stock. The disclosure for this sale can be found here. Over the last quarter, insiders have sold 48,680 shares of company stock valued at $11,084,027. 3.60% of the stock is currently owned by insiders.

Institutional Trading of MongoDB

Several hedge funds and other institutional investors have recently made changes to their positions in the company. B.O.S.S. Retirement Advisors LLC acquired a new stake in shares of MongoDB in the fourth quarter valued at approximately $606,000. Union Bancaire Privee UBP SA acquired a new position in shares of MongoDB during the fourth quarter valued at approximately $3,515,000. HighTower Advisors LLC lifted its position in MongoDB by 2.0% in the fourth quarter. HighTower Advisors LLC now owns 18,773 shares of the company’s stock worth $4,371,000 after purchasing an additional 372 shares during the period. Nisa Investment Advisors LLC boosted its stake in MongoDB by 428.0% during the 4th quarter. Nisa Investment Advisors LLC now owns 5,755 shares of the company’s stock valued at $1,340,000 after purchasing an additional 4,665 shares in the last quarter. Finally, Covea Finance acquired a new stake in shares of MongoDB during the fourth quarter valued at approximately $3,841,000. Hedge funds and other institutional investors own 89.29% of the company’s stock.

Analyst Upgrades and Downgrades

MDB has been the subject of a number of recent research reports. The Goldman Sachs Group lowered their target price on MongoDB from $390.00 to $335.00 and set a “buy” rating for the company in a research note on Thursday, March 6th. Wedbush cut their target price on shares of MongoDB from $360.00 to $300.00 and set an “outperform” rating on the stock in a research note on Thursday, March 6th. Morgan Stanley decreased their price objective on MongoDB from $315.00 to $235.00 and set an “overweight” rating for the company in a report on Wednesday. Piper Sandler lowered their target price on shares of MongoDB from $425.00 to $280.00 and set an “overweight” rating for the company in a report on Thursday, March 6th. Finally, Daiwa Capital Markets started coverage on shares of MongoDB in a research note on Tuesday, April 1st. They issued an “outperform” rating and a $202.00 price target on the stock. Eight analysts have rated the stock with a hold rating, twenty-four have issued a buy rating and one has assigned a strong buy rating to the company. Based on data from MarketBeat, MongoDB has an average rating of “Moderate Buy” and a consensus target price of $299.78.

View Our Latest Report on MongoDB

MongoDB Company Profile


MongoDB, Inc., together with its subsidiaries, provides a general-purpose database platform worldwide. The company provides MongoDB Atlas, a hosted multi-cloud database-as-a-service solution; MongoDB Enterprise Advanced, a commercial database server for enterprise customers to run in the cloud, on-premises, or in a hybrid environment; and Community Server, a free-to-download version of its database, which includes the functionality that developers need to get started with MongoDB.


Article originally posted on mongodb google news. Visit mongodb google news



MongoDB (NASDAQ:MDB) Stock Rating Upgraded by Redburn Atlantic – Defense World

MMS Founder
MMS RSS

Posted on mongodb google news. Visit mongodb google news

MongoDB (NASDAQ:MDB) was upgraded by Redburn Atlantic from a “sell” rating to a “neutral” rating in a research report issued on Thursday, Marketbeat Ratings reports. The firm presently has a $170.00 price target on the stock. Redburn Atlantic’s price target points to a potential upside of 6.74% from the company’s previous close.

A number of other equities research analysts have also weighed in on the company. Barclays reduced their price objective on MongoDB from $330.00 to $280.00 and set an “overweight” rating on the stock in a research report on Thursday, March 6th. China Renaissance started coverage on shares of MongoDB in a report on Tuesday, January 21st. They issued a “buy” rating and a $351.00 price objective for the company. Robert W. Baird decreased their price target on shares of MongoDB from $390.00 to $300.00 and set an “outperform” rating for the company in a research report on Thursday, March 6th. Tigress Financial increased their price objective on MongoDB from $400.00 to $430.00 and gave the company a “buy” rating in a report on Wednesday, December 18th. Finally, Mizuho reduced their price target on MongoDB from $250.00 to $190.00 and set a “neutral” rating on the stock in a research report on Tuesday. Eight research analysts have rated the stock with a hold rating, twenty-four have assigned a buy rating and one has given a strong buy rating to the stock. According to MarketBeat.com, the stock has an average rating of “Moderate Buy” and an average target price of $299.78.

View Our Latest Report on MDB

MongoDB Trading Down 0.5%


NASDAQ MDB opened at $159.26 on Thursday. The firm’s 50 day simple moving average is $208.15 and its 200 day simple moving average is $252.43. The firm has a market capitalization of $12.93 billion, a price-to-earnings ratio of -58.12 and a beta of 1.49. MongoDB has a 1 year low of $140.78 and a 1 year high of $387.19.

MongoDB (NASDAQ:MDB) last issued its earnings results on Wednesday, March 5th. The company reported $0.19 earnings per share for the quarter, missing analysts’ consensus estimates of $0.64 by ($0.45). MongoDB had a negative net margin of 10.46% and a negative return on equity of 12.22%. The business had revenue of $548.40 million during the quarter, compared to analyst estimates of $519.65 million. During the same period in the prior year, the business posted $0.86 EPS. On average, equities research analysts forecast that MongoDB will post -1.78 earnings per share for the current year.

Insider Activity at MongoDB

In other MongoDB news, Director Dwight A. Merriman sold 1,000 shares of the firm’s stock in a transaction on Tuesday, January 21st. The stock was sold at an average price of $265.00, for a total transaction of $265,000.00. Following the transaction, the director now directly owns 1,116,006 shares in the company, valued at approximately $295,741,590. This trade represents a 0.09% decrease in their position. The sale was disclosed in a legal filing with the Securities & Exchange Commission, which is accessible through the SEC website. Also, CAO Thomas Bull sold 301 shares of the firm’s stock in a transaction dated Wednesday, April 2nd. The shares were sold at an average price of $173.25, for a total transaction of $52,148.25. Following the transaction, the chief accounting officer now directly owns 14,598 shares in the company, valued at $2,529,103.50. The trade was a 2.02% decrease in their ownership of the stock. The disclosure for this sale can be found here. In the last quarter, insiders have sold 48,680 shares of company stock valued at $11,084,027. 3.60% of the stock is owned by corporate insiders.

Institutional Inflows and Outflows

Hedge funds and other institutional investors have recently bought and sold shares of the business. Strategic Investment Solutions Inc. IL purchased a new position in shares of MongoDB in the 4th quarter worth about $29,000. Hilltop National Bank increased its stake in shares of MongoDB by 47.2% during the 4th quarter. Hilltop National Bank now owns 131 shares of the company’s stock worth $30,000 after purchasing an additional 42 shares during the last quarter. NCP Inc. bought a new stake in shares of MongoDB in the 4th quarter worth approximately $35,000. Wilmington Savings Fund Society FSB purchased a new position in MongoDB in the third quarter valued at approximately $44,000. Finally, Versant Capital Management Inc grew its holdings in MongoDB by 1,100.0% during the fourth quarter. Versant Capital Management Inc now owns 180 shares of the company’s stock valued at $42,000 after purchasing an additional 165 shares during the period. Hedge funds and other institutional investors own 89.29% of the company’s stock.

MongoDB Company Profile


MongoDB, Inc., together with its subsidiaries, provides a general-purpose database platform worldwide. The company provides MongoDB Atlas, a hosted multi-cloud database-as-a-service solution; MongoDB Enterprise Advanced, a commercial database server for enterprise customers to run in the cloud, on-premises, or in a hybrid environment; and Community Server, a free-to-download version of its database, which includes the functionality that developers need to get started with MongoDB.


Article originally posted on mongodb google news. Visit mongodb google news



Presentation: Changing the Model: Why and How We Re-Architected Slack

MMS Founder
MMS Ian Hoffman

Article originally posted on InfoQ. Visit InfoQ

Transcript

Hoffman: My talk is also a little bit about astronomy. This is the geocentric model of the solar system. For almost 2,000 years, people believed that the sun and the planets orbit around the Earth. They had pretty good reason to believe this. They saw in the morning that the sun would rise in the east, and at night it would set in the west. They thought naturally that the sun was circling the Earth. They also noticed that the stars didn’t appear to move. You would think if the Earth was moving, then your angle relative to the stars would change. The position of the stars would be different. Furthermore, using this model, ancient astronomers were able to make actually really accurate predictions about the positions of planets. They could predict the position of planets with up to around 90% accuracy. They could use these predictions to navigate, to sail, and to create calendars. This model actually worked really quite well.

By the Middle Ages, there were some issues. This is because the initial observations they had used to make their equations stopped applying as well. They had to continuously revise the model. One real problem with the geocentric model is that sometimes planets appeared to move backwards. We understand why that is today. It’s because the Earth and the planets are orbiting at different rates. When the Earth passes by a planet, that planet appears to move backwards. If everything’s going in a circle around the Earth, there really should be no backwards motion. Ancient astronomers solved this basically by making the model more complex. What they did is they introduced something called an epicycle. An epicycle is a micro-orbit within a planet’s orbit. The planet is going around in a circle, and it’s also going like this. Essentially, by introducing arbitrary epicycles and other such constructs, we might think of them as hacks, they were able to fit this model to the observed data pretty well.

However, in around 1500, Copernicus came along. He revived the heliocentric model of the solar system. This is a model in which the Earth and the planets orbit around the sun. We know this model to be mostly accurate. The Greeks had considered this. They had decided this is clearly nuts, but Copernicus revived it. One of the reasons why he liked it is because he thought it was conceptually simpler. For instance, with this model, theoretically, you don’t need epicycles to explain backwards motion. Copernicus was hamstrung in that he thought that everything has to go in a perfect circle. This was to match his notions of Aristotelian harmony and beauty. Because of this, he had to add back epicycles.

Then Kepler came along about 100 years later. Kepler realized that planets actually orbit in ellipses. Once you allow the planet to move in an elliptical form, then the heliocentric model not only has better predictive power than the geocentric model, for the first time, but is also far simpler. At this point, the heliocentric model became widely adopted by astronomers and scientists. This was also, in part, due to observations by Galileo.

What can we learn from this as software practitioners? First off, if you have a subpar model, a model you might have for many valid and good reasons, you can make that model work for a long time if you just add arbitrary complexity to it. You can just keep adding epicycles and things like that and adding complicated code on top of your already complicated architecture. You can make it work. A better architecture will solve the same problems as an inferior architecture in a simpler way. It will also let you solve additional problems that you probably could not have solved earlier.

Therefore, it pays to sometimes take a step back and to ask, is my foundational model, are my core assumptions still valid? Are they still serving me? Are they making my life easier, or are they making my life a lot harder? As software developers, we rightfully are a little risk-averse. We like to proceed incrementally for all kinds of excellent reasons. That’s usually what we should do. We should maybe add a little complexity here and a little complexity there instead of revising the core model. I’m not advising against that. If you notice yourself solving the same issues over and again, then it’s worth taking a step back and questioning the more core assumptions that you’re making, like Copernicus did.

Background

I’m Ian Hoffman. I’m a staff engineer at Slack. Previously, I worked at Chairish. I’m going to talk about a time when we at Slack revisited our core architecture and made some big changes to it for similar reasons to why the ancients started with this model and then why Copernicus revised it.

Slack Overview

First off, what is Slack? Slack is a communication app for businesses. You can use it if you’re not a business, but it is designed probably primarily for businesses. We have three first-party clients, a desktop/web client written in Electron and React and Redux, and then iOS and Android apps. Our backend is a monolith written in Hack, which is a language like PHP. It’s like a strongly typed, just-in-time compiled version of PHP. Most of our data is stored in MySQL databases sharded using the Vitess sharding system. This is what Slack looked like circa 2015 or something like that.

Slack’s V1 Architecture (The Workspace Model)

I’m going to begin by talking about the evolution of Slack’s architecture in order to motivate the changes we made. Then I’ll describe what the changes we made were, why we made them, and then how we went about making these changes. Finally, I’ll close with some takeaways. Slack began in 2013 with a pretty simple architecture that I like to call the Workspace Model, though it does not have an official name. In this model, a Slack workspace is equivalent to a Slack customer. A workspace contains users, channels, messages, apps, all the things you’re used to in Slack. Slack is this channel-based communication platform. You can enter a channel and send a message to other users who are in that channel. This is all contained within one workspace. Slack also has this concept of apps, which are third-party apps that developers can build and run in Slack. For instance, a bot that you can use to triage tickets from Jira, for example, or to look at issues in GitHub, you can run that in Slack.

Importantly, in this model, each workspace is a closed system. Workspaces share nothing between each other. That means if I, Ian Hoffman, I’m a human being and I have access to multiple Slack workspaces, Slack doesn’t know anything about that. These are separate logins. It’s not one account. This has a nice property, which is that the data for a single workspace can be put on a single database shard.

Basically, the server will route queries from a workspace to a shard. If you want to scale up, you just buy more databases and put more customers on them. We have this core assumption that the data for a single customer would fit on a single database shard. Maybe we have to buy a really big database, but we can still serve all of their traffic with one database. That’s because in the beginning, Slack was targeted at teams with maybe a hundred or maybe a thousand people. The chance of them producing so many messages and channels and just so much data that we couldn’t handle it on like an extra-large MySQL instance, seemed unlikely.

First off, I’ll walk through an example of how this worked in a little more detail. Let’s imagine you have a third-party app that you’ve built as a developer. Let’s take the GitHub app as an example. This lets you do GitHub-y stuff in Slack. That means the Slack client has to find out about this. It’s going to make a REST API call in order to load information about the GitHub app. How this works is the client makes an API call to the server and it has this token. It tells the server, I want to load the GitHub app. It uses this token, which is encrypted and authenticated. The token has a user ID and a workspace ID in it. The server then takes this workspace ID and it knows how to map that workspace to a specific database shard. It’s going to look on that shard for the GitHub app. If it finds the app on that shard, then great, the client is allowed to use the app. Otherwise the app is not installed for that workspace. It’s not allowed. That’s an error. This worked pretty well, but there are some problems.
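As a rough illustration of that v1 lookup, the sketch below routes an app query by the workspace ID carried in the token. The shard maps, helper names, and error type are invented for illustration; this is not Slack's actual code.

```python
# Sketch of the v1 (Workspace Model) lookup described above.
# Shard contents and names are invented for illustration.
WORKSPACE_SHARD = {"T123": "db-shard-7"}          # workspace ID -> shard
SHARD_APPS = {"db-shard-7": {"github-app"}}       # shard -> installed apps

class AppNotInstalled(Exception):
    pass

def load_app_v1(token: dict, app_id: str) -> str:
    # The API token carries the user ID and the workspace ID.
    shard = WORKSPACE_SHARD[token["workspace_id"]]
    if app_id in SHARD_APPS.get(shard, set()):
        return f"{app_id} found on {shard}"
    raise AppNotInstalled(app_id)

print(load_app_v1({"user_id": "U1", "workspace_id": "T123"}, "github-app"))
```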

One I was already hinting at, which is, what if the data for one customer doesn’t fit on one shard? This began to happen because Slack caught on much more than people originally expected. Soon enough, we had customers with 10,000 or 100,000 users sending millions of messages.

At that point we started to really knock over databases. Also, what if customers actually want multiple workspaces? There are all kinds of reasons why a large enterprise might actually want to partition their organization into multiple workspaces, because workspaces act as content boundaries. You can say, I want these employees to have access to this stuff and these employees to have access to this stuff, but I want to administer them as a unit. I want to handle billing in one place. I want to handle security in one place. I want to manage users in one place. That’s impossible to provide if there’s nothing shared between workspaces. We wanted to solve these problems, and we were also running into these scaling issues of just our customers getting too large and workspaces having too much traffic.

Slack’s Architecture V2 (Enterprise Grid)

To solve this, we made our model a little bit more complicated, just like the ancients introduced these epicycles. We introduced our Enterprise Grid architecture in 2017, which was our enterprise product. It’s interesting to note that actually the Workspace Model is still basically how Slack works for our smaller customers. For the really big fish, of which there are many, the Enterprise Grid model became a significant product that we sold a lot of. In Enterprise Grid, a customer can have many workspaces, all under the umbrella of their grid, of their enterprise. A user can belong to many of those workspaces, all within the enterprise. Are any of you aware of a company you work at using Enterprise Grid, or have you used this product? This is what I’m describing here.

Finally, data, such as like channels and apps and that sort of thing, can be shared across multiple workspaces. You can say, I want an announcements channel that is available in every workspace on my enterprise. This makes disseminating data throughout your Slack instance much easier than if you were trying to manage multiple totally isolated workspaces, as people were doing priorly. This is how Enterprise Grid looked. You can imagine this DF, JE, SM, these are just like design mockup nothings, but imagine these are all workspaces under the Acme, Inc. Enterprise. If you’re looking at DF, you’re only seeing data from DF, and same goes for the other ones. You have to switch between these workspaces in order to see everything that you can access within the enterprise.

The architectural implications here. The way we made this work is to say, we have these workspaces, and we’re going to have one special secret, invisible workspace, that serves as the org, the parent of all these other workspaces. Just as the data for each workspace is on its own shard, the data for the parent org is on its own shard too. What is org-level data? It’s any data that is shared. Anything that should be available to the entire organization, or entire enterprise, we use these words interchangeably, but anything that should be available to the whole org lives on the org shard.

Basically, channels and apps that are shared with more than one workspace go on the org shard, and everything else goes on the workspace shards. Now that you can find things in two places, how do you successfully route queries? We did something very simple, which is that the backend now queries the current workspace shard and the org shard, always. We just always do two queries. Theoretically, this should decrease the load on any one workspace shard, because now instead of customers with one gigantic workspace, they probably partition their large workspace into many more focused workspaces, therefore spreading out the data and the load across these different workspaces. Here’s another beautiful architectural diagram. It’s not really on par with the models of the geocentric and heliocentric solar system from earlier. I’ll go through the same example again.

In this example, we’re going to load the GitHub app. Again, we pass up this authenticated token. It has the user. It has the workspace. Again, we query the workspace shard. This time, let’s say that this is an app installed on the org level. Let’s say it’s available to the entire org. This time, we find, it’s not available for the workspace. What do we do? We load the org ID for that workspace, assuming the workspace is part of an org. We get back the org ID, and we now route that org ID to a shard. We’re looking now for the GitHub app on the org shard. If we find it, again, we return it to the client, and we let the client use it. What this means is that any workspace under this org will end up querying the same org shard. This is a simple way of making that org-level app available to all the workspaces.
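A comparable sketch of the Enterprise Grid lookup, with the fallback from the workspace shard to the org shard; again, the data structures and names are invented for illustration.

```python
# Sketch of the v2 (Enterprise Grid) lookup: check the workspace shard,
# then fall back to the org shard. Names and data are invented.
WORKSPACE_SHARD = {"T123": "db-shard-7"}
ORG_OF_WORKSPACE = {"T123": "E789"}               # workspace -> parent org
ORG_SHARD = {"E789": "db-shard-42"}
SHARD_APPS = {"db-shard-42": {"github-app"}}      # app installed org-wide

def load_app_v2(token: dict, app_id: str) -> str:
    ws = token["workspace_id"]
    for shard in (WORKSPACE_SHARD[ws], ORG_SHARD.get(ORG_OF_WORKSPACE.get(ws))):
        if shard and app_id in SHARD_APPS.get(shard, set()):
            return f"{app_id} found on {shard}"
    raise LookupError(f"{app_id} not installed for {ws} or its org")

print(load_app_v2({"user_id": "U1", "workspace_id": "T123"}, "github-app"))
```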

This model works really well, and Enterprise Grid became a really successful thing for Slack, but there were some problems that I’ll now go through. First off, there were some UX issues. When we conceived of Enterprise Grid, the users really belonged to one workspace on average. We weren’t really building something to handle a situation in which one user was in several workspaces, all within the same Enterprise Grid, and had to switch between them to do their work. As Enterprise Grid matured and more companies started to use it, people ended up in more than one workspace within the grid.

Actually, I’ll ask, of people here who use Enterprise Grid, do you know if you’re in more than one workspace? It’s quite common these days. That meant people had to switch between their workspaces to do their work. They would miss activity in workspaces they looked at less. We tried to fix this by introducing hacky things like a threads view and an unreads view, that actually aggregate org-wide data, but within the view of a single workspace. Here is the unreads view, and you can imagine that if this help customer support channel comes from a different workspace than Acme Sites, if you click on it, you would be bounced into that other workspace, which would be jarring. At least it would let you see everything in one place, and so it represented a slight improvement over the prior approach.

There were also bugs. One really interesting bug here is that there were inconsistent views of org-level data. Imagine a channel, like an announcements channel, that’s shared across all the workspaces on an organization. If you’re an administrator in only one of those workspaces, you can modify the channel so you can rename it when you’re looking at it from that particular workspace. When you switch to a different workspace, you now can’t. As a user, this makes no sense and it’s hard to explain to people. That was one persistent class of issues. Also, I’ve described how Slack’s backend, we partition our data by workspaces.

The data for a particular workspace is stored on a shard, on a database server for that workspace. We do the same on clients, actually. We take data from each workspace and all that data is completely separate on the client. It’s not munged together in any way. We have separate client-side data repositories for each workspace, even though you might be looking at multiple workspaces on the same client, you might be logged into several workspaces under your grid. What this means is that things can get out of sync because if you fetch org-level data for one of those workspaces, or let’s say for a few of those workspaces, you’ve now ended up loading this org-level data into the datastores for each of those workspaces.

Now you change it in only one of those workspaces, the one you’re currently looking at, and that update, for whatever reason, might not make it to those other workspaces. Then you’re looking at a stale view when you switch to the other workspace. We fixed a lot of these bugs, but they were inherent in the model. In the fact that you have these multiple views into the same piece of data, there’s always a chance for things to get out of sync. We had to be really vigilant to prevent this bug from reemerging.

Also, this model is just not the most efficient. If you have an org-level piece of data, you’re going to load it again in every single workspace you look at. For example, DMs are org-level. That means every time you go to a workspace within the grid, we’re reloading your DMs. That’s just enormously wasteful. It’s also a larger memory footprint. I was just talking about how we were duplicating org-level data into these workspace-partitioned data repositories on the client. That also just causes memory bloat. We did build an org-level datastore that lets you store org-level data in one place, but this took time to adopt. Slack is an interactive system, so it’s not just the client querying the backend, the backend will push updates to the client.

The way the backend does this is it maintains WebSocket connections for each workspace. Some of our customers actually have thousands of workspaces. That means whenever you make an org-level change that should impact all the workspaces, it has to be fanned out across thousands of WebSocket connections. This is inefficient. Eventually, we built an org-level WebSocket to fix this, but again, that was not widely adopted and took time to adopt. We felt like we were running up against this class of issues where there were these persistent UX problems, there were persistent user experience bugs and confusing bugs of the multiple perspectives bug I talked about earlier, and then there were just inefficiencies with this model too where we were doing a lot of redundant work.

Changing the Model

We decided that maybe we’d been going about this the wrong way. In 2022, we took a step back and we reconsidered some of our core assumptions here, like Copernicus did. We’re on a far lower plane than him, just to be clear. We asked the foundational question, which is, why should users view data from only one workspace at a time? What if you could see everything you needed in a single place? If you’re a user, why do you really care where a channel comes from as long as you know you have access to that channel? Wouldn’t this be a simpler experience? If we had this model, there wouldn’t be any context switching between workspaces because there are no workspaces. There wouldn’t be any missed activity, or at least no missed activity in different workspaces, because, again, there are no workspaces. You can’t have inconsistent views of org-level data because you’re only getting one view of your data.

There can’t be duplicate API calls, because, again, you’re only getting one view of your data. You’re not switching between workspaces. You’re not storing redundant data on clients because you only need to store the data once. You don’t need to store it per workspace anymore. We felt like this nicely simplified a lot of the issues we were running into with Enterprise Grid. We felt like this was also a better foundation for the product to continue evolving. Though Slack had started off with this Workspace Model and had started off being highly all about workspaces, we had moved in a more org-level direction over the years.

A lot of our new features like Canvas, which is our document editor, and Lists, which is like Google Sheets, and of course DMs are org-level by default. These are things that are available across the whole org. It was a weird experience to switch between workspaces but always see the same canvases, the same lists, the same DMs. Giving you an org-level view matched, in many ways, the direction that Slack has been evolving.

Slack Architecture V3 (Unified Grid)

With this, we introduced our v3 architecture, which I call Unified Grid. In Unified Grid, the user can see everything they can access within the enterprise in one view. However, access control is still determined by workspaces. The workspaces still act kind of like ACL lists, limiting what you can see. Importantly, Unified Grid does not change what users can do. It doesn’t actually change anything about the permissions model. It just reorganizes things in a quite fundamental way. You’ll remember, this is what Enterprise Grid looked like, and this is what Unified Grid looks like. Here you can see that we’ve reclaimed this sidebar, which used to show one tile per workspace. We now can use the sidebar to show other useful things, like all your DMs, all of your activity across all workspaces, your later feature, which is like a to-do list or a set of reminders. You can see within this Acme Inc., channel list, you can imagine that we’re incorporating channels from multiple workspaces all under Acme Inc.

How did we do this? In Unified Grid, the API token no longer determines the workspace shard. You’ll remember in all those prior architectural diagrams I showed, we were passing up the current workspace ID when we were making an API call to the backend. How do we select the workspace to put in that token when we’re in Unified Grid? There is no current workspace. We would have to pick arbitrarily. We don’t do that. Instead, we include the ID of the current org. You’ll remember the org is just a secret hidden workspace. We’re including that ID. Now the server needs to check both the org shard and also multiple workspace shards, because the data we’re looking for could be found on the org shard, as before, or it could be found on the shard for some workspace. We don’t know which one, because you’re looking at everything at once.

This sounds really non-performant, obviously, and we were, of course, a little bit concerned about this. It turns out that we can limit the workspaces that we check to just those that the current user is a member of. Most users are still in just a handful of workspaces, maybe three or four at most. This check ends up being fairly efficient. There is a long tail of users who are in hundreds of workspaces, but it turns out that most of these users are system administrators. They’re not actually using Slack in all those workspaces. They just need to administer them, and that means they really need to get to the admin site for those workspaces, which requires them to be a member of the workspace. The way we handled this was to say, we’re basically going to consider 50 workspaces when we do this check-every-shard logic, and we’re going to let you edit this list. It’s confusing, but this affects a minuscule subset of users, like we’re talking point something-something.

This strikes a good balance between handling these extreme outliers and allowing us to move forward with this architecture. This is an even more complicated diagram that I’ll go through. Same example as before, we want to load information about the GitHub app to display in Slack. We make an API call, but you’ll notice that this time we’re using the org ID. This E789 number is in the token instead of the team ID. That means that we end up querying the org shard first. We say, is this app installed at the level of the org? Let’s say that it’s not. Let’s go back to our first example where the app is actually just installed for one workspace. We get a miss on the org. The next thing we’ll do is we’ll load up all the workspaces for the user, and that’s a cache hit in memcache, 99.99% of the time. Then with that list, we’ll loop over every workspace, and we’ll query its shard. We’ll query at most 50 workspaces here, if that’s how many the user is in, but commonly we’ll query 3 or 4.
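And a sketch of the Unified Grid flow described above: the token now carries the org ID, and the server checks the org shard plus the shards of the user's workspaces, capped at 50. Everything here is illustrative, not Slack's implementation.

```python
# Sketch of the v3 (Unified Grid) lookup: org shard first, then the shards
# of the user's workspaces, capped at 50. All names and data are invented.
ORG_SHARD = {"E789": "db-shard-42"}
WORKSPACE_SHARD = {"T123": "db-shard-7", "T456": "db-shard-9"}
USER_WORKSPACES = {"U1": ["T123", "T456"]}        # normally a memcache hit
SHARD_APPS = {"db-shard-9": {"github-app"}}       # installed on one workspace

MAX_WORKSPACES_CHECKED = 50

def load_app_v3(token: dict, app_id: str) -> str:
    shards = [ORG_SHARD[token["org_id"]]]
    shards += [WORKSPACE_SHARD[ws]
               for ws in USER_WORKSPACES[token["user_id"]][:MAX_WORKSPACES_CHECKED]]
    for shard in shards:
        if app_id in SHARD_APPS.get(shard, set()):
            return f"{app_id} found on {shard}"
    raise LookupError(f"{app_id} not visible to {token['user_id']}")

print(load_app_v3({"user_id": "U1", "org_id": "E789"}, "github-app"))
```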

We strongly believe this offered a better user experience, and actually a better developer experience too, and just got rid of this concept of the workspace that was becoming increasingly vestigial, and moved things to be on the org level in a way that really matched the direction we wanted to go at Slack. It was of course a very large change. We were concerned that Unified Grid would be dead in the water from the start. To give you some stats, we had well over 500 API methods that depended on a workspace for routing.

What this means more concretely is that these were API methods called by a first-party Slack client, a client that would be switching from the workspace to the org in Unified Grid, and that routed to a database table where the routing depended on the workspace in the API token. You can imagine that any or all of those APIs could break if we stopped including the workspace in that token. We also had over 300 team settings which could differ at the workspace level. These are settings where, for the same setting, you could have different values for each workspace on the org. There was a question of, how do you rationalize that in one view? Each of these settings would need to be handled on a case-by-case basis, with product teams deciding what made sense for that setting.

Then, any backend change we made needed to be replicated on all three clients. We were essentially doing 4x the work listed above. However, we didn’t want to take this all on at once; that would be crazy. We decided to begin with a prototype. We built a very simple prototype that could basically boot up, send a message, and show the messages in a channel. We began to use this prototype for our day-to-day work. We made it really easy to turn the prototype on and off with a single button. If you ran into an issue, you would just exit the prototype, keep doing your work, mark down that issue somewhere, and then when you had time, go and fix it.

At a certain point, we invited peers to start using the prototype, and they began to give us feedback. A really useful piece of feedback we got here is a focus mode. Some people missed having this per-workspace view, because they wanted to focus on only content from a particular workspace. Maybe they don’t care what’s going on in the social workspace today. This is a view where you can actually filter down your channel list to just channels from specific workspaces in the Slack client in Unified Grid. Eventually, as the prototype matured, we invited leadership to use it. At this point, leadership came on board, and Unified Grid became a top priority in January 2023.

At this point, the core team pivoted from prototyping into attempting to projectize what we were working on, creating resources and tools that other teams could use to help with this large migration. The first tool, which is very exciting for everyone, I’m sure, was a bunch of spreadsheets. We created spreadsheets just listing every API method and permission check and workspace setting, which might need to be audited as part of Unified Grid. As I mentioned earlier, we had data about which APIs were called using a workspace token, and then use that token to route to a particular table. All of those APIs were, of course, fair game.

Then any permission check they ended up making needed to be looked at, too. Any setting they looked at had to be checked, and all the way down the rabbit hole. Once we had these in place, we worked with project managers and subject matter experts to assign them to various product teams. We also created a bunch of documentation to make it easier for these teams to do this migration. We worked out these approaches during the prototyping phase, so that was another really invaluable part of prototyping, was we got a sense of how hard this migration was going to be. We realized that there were actually three primary tactics to migrate an API to be compatible with Unified Grid. This is a little bit of a sidebar, but several years ago, a Slack engineer named Mike Demmer came to QCon, and he spoke about our Vitess migration. He was also the architect of Unified Grid.

The Vitess migration was a change in which we moved away from this per-workspace, per-org sharding model to a more flexible sharding model. We’re using Vitess, which is essentially a routing layer for MySQL. We could re-shard tables along more sensible axes. For example, we re-sharded our messages table such that all the messages for a particular team, or a particular workspace, are no longer on that workspace’s shard. They’re now sharded by the channel ID, so all the messages for the same channel are on the same database shard. This is a much more sensible sharding strategy for messages, because it’s unlikely that one channel has too many messages for a database shard. You can easily imagine that one workspace has an incredible amount of messages in it.

The good thing about this is that if a table had been re-sharded such that it no longer depended on the org or the workspace ID, then it didn’t have to change in Unified Grid, because we were already routing based on something that wasn’t going to change. We were changing this API token from containing the workspace ID to the org ID, and that doesn’t affect how these queries are routed. There’s another class of API which actually requires workspace context. At Slack, every channel is created within a specific workspace. We could have revisited that for Unified Grid, but we decided not to. We decided that that’s still a decent baseline. In the past, the workspace in which to create a channel would just be determined by the workspace you were currently looking at. If you’re in a workspace, that’s the workspace where you create the channel.

In Unified Grid, of course, there is no current workspace. We made this really simple. We just made this implicit decision explicit, by popping up literally a dropdown menu and having the user pick a workspace when they go to create a channel. Finally, if the API doesn’t fall into either of these buckets, so it’s still sharded by workspace ID or org ID, and it doesn’t require this more explicit context, then we do the strategy I described earlier where we have to check all the users’ workspaces potentially in this potentially expensive manner.

This was obviously a really big change, and with large changes, things can break. We wanted to make sure that we had good test coverage. Over 10 years of Slack existing as a product, we have written thousands of integration tests, probably more. We didn’t want to rewrite all these tests, and we also didn’t want to lose the coverage they provided. What we did is we created a parallel test suite that runs all of these tests, but it automatically switches the workspace context to the org level. The APIs suddenly began to receive an org, and they, of course, all break. This gave us a burndown list, and our product teams fixed them during the migration, which was very kind of them. By the time that we launched, there were actually zero tests failing as a result of this anymore. This allowed us to avoid rewriting our test suite and to still have pretty robust coverage.
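One way to picture that parallel suite is a parametrized fixture that runs every test once per context, as in this pytest-flavored sketch; the fixture, IDs, and stub client are invented, not Slack's actual harness.

```python
# Pytest-flavored sketch of running the same integration tests twice:
# once with workspace context, once with org context. The fixture,
# IDs, and stand-in client below are invented for illustration.
import pytest

@pytest.fixture(params=["workspace", "org"], ids=["grid", "unified-grid"])
def api_context(request):
    # In the real suite this would swap the ID placed in the API token.
    return {"context_kind": request.param,
            "context_id": "T123" if request.param == "workspace" else "E789"}

def call_api(method: str, ctx: dict) -> dict:
    # Stand-in for a real API client; always succeeds here.
    return {"ok": True, "method": method, "routed_by": ctx["context_kind"]}

def test_conversations_list(api_context):
    resp = call_api("conversations.list", api_context)
    assert resp["ok"]
```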

Finally, we did some basic things like create easy-to-use helpers just wrapping up common logic. You know how I described earlier this bug in which you could administer a channel only from a workspace where you were an admin, and if you switched to another workspace with access to that channel where you weren’t an admin, you couldn’t administer it? What that means is that in the old Slack client, in Enterprise Grid, you could simply click through your workspaces until you found one where you were an admin, and then you could administer the channel. We do this for you. We just have a helper that says, can the user act as an admin for this channel? It takes the user, it takes the channel, it intersects their workspaces.

If the user is an admin in any of those workspaces, then the answer is yes. With this, we got to do something very gratifying, which was watch a shard go to zero as product teams jumped in and began to burn down these APIs and permissions checks and settings. We began our rollout in September 2023, and we finished in March 2024. We forced an upgrade of the last pre-Unified Grid mobile clients this October, so quite recently.
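Returning to that admin helper for a moment, the intersection logic described above might look roughly like this sketch; the types and field names are invented.

```python
# Rough sketch of the "can the user act as an admin for this channel?"
# helper described above; types and field names are invented.
from dataclasses import dataclass, field

@dataclass
class User:
    id: str
    admin_of: set[str] = field(default_factory=set)    # workspace IDs

@dataclass
class Channel:
    id: str
    workspaces: set[str] = field(default_factory=set)  # where it is shared

def can_administer(user: User, channel: Channel) -> bool:
    # Being an admin in *any* workspace the channel is shared into is enough,
    # regardless of which workspace the user happens to be "looking from".
    return bool(user.admin_of & channel.workspaces)

u = User("U1", admin_of={"T123"})
c = Channel("C9", workspaces={"T123", "T456"})
print(can_administer(u, c))  # True
```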

Takeaways

What did we learn from this whole process? Some of these takeaways will be more mind-altering than others, probably. First off, you should really centralize complexity. You might look at this and say, isn’t this a simpler model? This is our v1 architecture. Isn’t this a lot simpler than this? This seems like a step backwards. I think the counter argument is that we now handle such a broader range of use cases for our customers, and we’ve centralized complexity on the backend. Before, customers and clients and users had to think about things like, what workspace am I in? Now, they don’t anymore. While we’ve made things harder for the backend, we’ve made them simpler for clients.

In fact, in some ways, we’ve made things easier for the backend, too, because the server now explicitly has to handle the possibility that something is in an arbitrary workspace. Whereas, before, the current workspace was always implicit in every operation. You can view an action prior to Unified Grid as a function of the current user, the resource, and the implicit workspace. Whereas, in Unified Grid, we’ve made that explicit by saying the action is a function of the user and the resource, and that’s it. This could also be an example of explicit is better than implicit.

In terms of efficiency, the fastest API call is one that doesn’t happen. This is pretty anecdotal, but these are the calls required to boot the Slack client for a user in January 2022, prior to Unified Grid, versus January 2024. In Unified Grid, API calls can take somewhat longer, because they need to do more things. You’ll note that we make many fewer API calls. This client.counts API is the API that paints highlights on the sidebar. It figures out which channels have unread things and that sort of thing. We made almost twice as many calls to it priorly for this user, within their enterprise. Then, the boot API, we replaced our boot API, which is called client.boot, with an API called client.userboot. Doesn’t really matter. We make four times fewer API calls in Unified Grid than we made priorly. Even though each of those API calls is a little bit heavier, this is like a massive saving overall.

Also, you should really prototype. Prototyping is a great way to get feedback, to figure out if something is going to be feasible, to work out the rough edges of the UX. To bring it back to our friend Copernicus, his initial model was not so great. It was maybe an improvement on the geocentric model, but it had all kinds of problems. By putting this theory out there, he allowed people like Galileo and Kepler to make significant progress and to eventually make this model become accepted. If you don’t put your big ideas out there, and a great way to put them out there is with a working prototype people can play with, then they’re not going to become reality, ever.

Finally, take a step back and ask the big questions. For example, does the Earth actually orbit around the sun? Also, is our architecture serving us? I think as engineers, we, as I said in the beginning, can be a little averse to change. We like to make small incremental improvements. I think a way in which this manifests is that we often take the status quo that we’ve received as dogma. We say the product behaves like this, there must be a good reason why it behaves like this. We’re using this set of technologies, there must be a good reason why we’re using these technologies.

Often things that made a ton of sense several years ago have changed for all kinds of valid reasons. When you’re considering how you might improve the architecture of your application just to solve real issues you’re facing, you should be empowered to question these holy cows. Like Copernicus and Kepler, take a step back and say, what is the inherited wisdom we’re just following, and what can we change to make our lives easier? Then, how can we make that change responsibly?

Questions and Answers

Participant 1: After all the lessons you’ve learned from those three models, and the path that you followed to get to where you are right now, is there anything that you’ve learned from failure that you would change?

Hoffman: At the time that Enterprise Grid was built in 2017, it wasn't that we were unaware we could have built something like Unified Grid instead. Given what we knew about the way users used Slack, and given the pressure we were under, we felt like it wasn't worth it at the time. I wouldn't say that we should have gone back and done that differently. I do wonder, if we were to do it all over again, whether we would consider having a more first-class person entity in the way that Discord does. I also think there are some real advantages to having total separation between data for different customers. Given Slack is so business focused, I don't know if that ever would have flown. Maybe we would have been a little bolder and done this a little bit earlier.

Betts: Do you feel like you had to introduce those epicycles from Enterprise Grid before you realized it was so wrong?

Hoffman: At the time we did Enterprise Grid, it was the pragmatic thing to do. In retrospect, it certainly did increase complexity. Also, at the time, at least from a user experience standpoint, the average user was really in just one workspace within their org. It was really hard to justify the investment that reducing that complexity would have taken at the time. Again, maybe we could have started down this path a few years before we did. We were doing incremental things, though; things like the Vitess migration made this so much easier to do than it would have been in 2017.

Participant 2: Do you do any sharding for the WebSockets connections to make sure that you efficiently push data to all of the relevant WebSockets connections?

Hoffman: I don’t know a ton about how our RTS system works under the hood. There’s a whole team that works only on that. In Unified Grid, we attempt to push most data to the org-level socket only. That means it just gets pushed to one place. For workspaces, we do check whether that workspace is online. The RTS server has an understanding of which workspaces are currently connected. Then we can avoid pushing to workspaces. This is for users. This is how users work. Users can be off or online, and users have their own sockets as well. That’s for users. For workspaces, I think they always get all pushes because they’re always online in some capacity.

Participant 3: Did you face resistance in part of the technical teams? Because I bet you faced them. There are always some of the engineers, maybe the older ones, which are protecting somehow the previous solutions because they feel like it’s theirs, it’s something which works, it’s not worth changing it. Did you face anything similar?

Hoffman: We didn’t face that much resistance from engineering teams within Slack. I think the prototype was a really big reason of why we didn’t. Because people began to use it, and it became something that people liked. One thing I left out of here too is that, every few years, Slack does a total UI revamp, it’s our IA series. We’ve had information architecture 1, and 2, and 3, and now 4. We managed to combine forces with this UI revamp. We said, we’re changing the UI in this fundamental way, let’s use this as a chance to do Unified Grid as well. Once we had both design and product pushing for this, and then also a significant portion of engineering, I think it was pretty easy to get buy-in at that point. There were certainly individual engineers who were like, I don’t want to work on migrating code for three months, but they were overall pretty accommodating about it.

Participant 4: Beyond the requirement from product, what about the financial side? Did the cost go up after changing to the more complex system?

Hoffman: I don’t know. I have not seen overall numbers for this. I think on first principles, we would expect the cost to remain stable or go down, because we do less traffic than we used to do. It’s possible the cost went up somewhat.

Participant 5: I’m curious for the question of, is our architecture still serving us and making decisions for either smaller steps and incremental changes, or big revamps? What information makes these conversations easier?

Hoffman: I think having lots of examples of things where the architecture has made things hard. All the similar bugs we had been fixing around inconsistencies and people seeing actions they could do in some channels and not others and not understanding this. There had been entire projects to consolidate API calls so that we weren’t redoing them for every workspace. All those projects failed because they were so complicated and they didn’t change the overall model. It made them even harder to ship, because you were changing the underlying architecture without changing the user-visible architecture. I think at a certain point, it was like, wouldn’t it be easier to just fix all of this at once? At that point, we were able to get buy-in. In some ways, I think if we hadn’t been running into these issues, it would have been very hard to make a pitch for this.

Participant 5: It’s more of a backward-looking data we have up-to-date.

Participant 6: In your presentation, you showed the older architecture and how you moved to the current one. Did you consider any other architectures? Because when you're prototyping, you don't know the outcome until you have production data and everything. Some prototypes work well when there is a smaller subset of data, but in production you have a much larger dataset, and sometimes the architectures go sideways. Did you consider any other architectures, and how did you make the decision?

Hoffman: We did. We considered an architecture where actually instead of just doing Unified Grid for the grid, we had user-level Unified Grid. You literally saw everything you could access in one place, whether or not it came from the current enterprise. If I was in a workspace for work and a workspace for my apartment building or whatever, that would all go in one place. A little bit more like the Discord model or something, where it’s one user getting access to everything. We decided that was counter to Slack’s position as an app that’s primarily focused on businesses.




Presentation: LLM and Generative AI for Sensitive Data – Navigating Security, Responsibility, and Pitfalls in Highly Regulated Industries

MMS Founder
MMS Stefania Chaplin Azhir Mahmood

Article originally posted on InfoQ. Visit InfoQ

Transcript

Chaplin: Who here is responsible for AI initiatives within their organization? That might be as a decision maker, an influencer, or involved somehow. Who is already doing stuff, as in executing and implementing? Looking good. Who's a little bit earlier? Maybe you still have some open questions; you're still designing, in discovery, thinking about it. We'll try and tailor the talk accordingly.

I am Stefania Chaplin. I am a Solutions Architect at GitLab, where I work with enterprise organizations, predominantly in security, DevSecOps, but also in MLOps as well.

Mahmood: I’m Azhir Mahmood. I’m an AI Research Scientist at PhysicsX. Prior to that, I was a doctoral candidate at UCL developing cutting edge machine learning models. Prior to that, I was in the heavily regulated industry developing my own AI startup.

Chaplin: If you want to follow along with what we get up to, our website is latestgenai.com, where we publish a newsletter with the latest and greatest of everything happening, always from a secure, responsible, ethical, explainable, transparent perspective.

GenAI is Revolutionizing the World

Mahmood: GenAI is revolutionizing the world. It's doing everything from hurricane prediction, where NVIDIA were able to forecast Hurricane Lee a week in advance, to protein structure prediction by the team at Isomorphic Labs. Recently, the first ever AI-designed drug was developed by the Hong Kong startup Insilico; it's currently going through FDA trials. At PhysicsX, we use AI for industrial design and optimization, really accelerating the speed at which we can invent new things. That's how AI is transforming the world of engineering and science. How is it changing the world of business? Allen & Overy introduced ChatGPT 4 to 3,000 of their lawyers.

Since then, with Microsoft, they developed ContractMatrix and are using it for drafting, reviewing, and analyzing legal documents; they've now launched it to their clients. At McKinsey, Lilli is able to condense all of McKinsey's knowledge, about 100,000 documents, and allow it to be recalled within mere moments. That's reduced the amount of time associates spend preparing for client meetings by about 20%, and it's unlocking a whole new area of creativity for them. Bain & Company, within their recent M&A report, found that about 50% of M&A companies are leveraging AI, especially within the early stages: everything from sourcing and screening to due diligence. So that is how AI can really revolutionize your business.

What Are Highly Regulated Industries, and Types of Sensitive Data?

Chaplin: What are highly regulated industries, and what are the types of sensitive data? These are industries subject to government regulation and compliance; they have standards and laws in place for consumer protection, public safety, fair competition, and environmental conservation. Some examples of these industries: who here is in finance? Medical or pharmaceutical? Defense, government, and utilities? Anything I didn't say, anyone missing? I think I've got everyone covered. Then there are the types of sensitive data. The important thing when it comes to machine learning and AI is that you are only as good as your data: if you have bad data, how are you going to train your models? Especially when it comes to sensitive data, there are many types, and you're going to have to treat them differently. For example, maybe you need to start obfuscating or blanking out credit card details. You need to think about your data storage and how you're going to store it.

Also think about the types of PII: name, email, mobile, biometrics, religion, government, ideologies, sexual orientation. There are a lot of different types of PII out there. From an organizational and business perspective, a lot of companies, especially larger, highly regulated ones, are publicly listed. Any information that could influence the stock price counts as sensitive data, because if that is leaked or used nefariously, there will be issues. Within health, hopefully you will have come across the HIPAA regulation, which is one of the stricter ones out there in terms of medical records and what to do with them. There is also high-risk data. I work with a lot of defense organizations in my experience at GitLab, and it's very much: we have low-risk data, medium, classified, and we have our data diodes for getting the data around. If you're going to be using sensitive data as part of your machine learning models, you really need to pay attention to making sure it's safe, secure, and robust.
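As a rough illustration of the obfuscation idea above, here is a minimal Python sketch that blanks out card numbers and email addresses before records go anywhere near a training set. The regular expressions and labels are simplistic placeholders, not a complete PII solution; a real pipeline would lean on dedicated PII-detection tooling and a proper data-classification policy.

```python
import re

# Illustrative only: mask obvious card numbers and email addresses before
# sensitive records are stored or used to train a model.

CARD_RE = re.compile(r"\b(?:\d[ -]?){13,16}\b")
EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")

def mask_sensitive(text: str) -> str:
    text = CARD_RE.sub("[CARD REDACTED]", text)
    text = EMAIL_RE.sub("[EMAIL REDACTED]", text)
    return text

print(mask_sensitive("Contact jane.doe@example.com, card 4111 1111 1111 1111"))
# -> Contact [EMAIL REDACTED], card [CARD REDACTED]
```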

The AI Legislation Landscape

The legislation landscape has been evolving over the last five or six years. I'm starting off with GDPR. If you were working in a customer-facing role, GDPR was all anyone could talk about around 2016, 2017. What are we going to do with personal data? How are we storing it? Is it transferring to our U.S. headquarters? It was one of the first pieces of legislation that focused on data. Like I said, good data, good models, good AI. In 2021 we had the EU AI Act proposal. If you're watching this space, you may have noticed it just got passed into law in 2024. This was very much around different risk classifications, so low risk, critical risk. It's the first time we're really talking about transparency in legislation. In 2023, we had a few things happening. The first one was in D.C.: the Stop Discrimination by Algorithms Act, which then evolved into the Algorithmic Accountability Act at the federal level, so across the whole U.S., because what usually starts in D.C. spreads across the U.S. and the world.

With this one, it was very much: we need to understand what AI we're using, what the implications are, and what is happening. You see less of the, "It's a black box; you just put in someone's name, and it tells you whether they get a loan." No, we really need transparency and accountability. The UK has taken a bit of a different approach, where it's very much about principles, for example transparency, explainability, responsibility. We'll be talking about these a bit later. Having fairness, having a safe, secure model. It was much more focused on innovation and how we use AI.

The final one is the United Nations, and this one has been very interesting. It was talking about human rights. It was saying, for one, that we need to cease using any AI which infringes on human rights. Quotes from the session included: we need to govern AI, not let AI govern us. If you look at the way the legislation has evolved, from just personal data to transparency and explainability, it's now become a global human rights issue. On the map of the world, across the vast majority of North and South America, Europe, and Asia-Pacific, there is legislation either being discussed or being passed. You probably are from one of these countries. It's also worth noting where your company does business. A bit like GDPR: it wasn't American law, but if you're an American business doing business with the EU, you're affected. It's good to keep on top of the latest and greatest. I mentioned our newsletter at the beginning.

How to AI?” What is MLOps?

How to AI? What is MLOps? If I were to ask you, what is an MLOps pipeline, how do you do AI, who could give me a good answer? I have a little flow, a bit like a Monopoly board, because you just keep going around. Starting off in the top left and walking around, this is where the data comes in. For example, what is your data source? Is there sensitive data? How are you ingesting that data? Say we want to look at our financial records, our sales records for this product, or we want to predict customer churn, so we're going to look at our customer records. That's great. How are you ingesting and storing that data? Especially with sensitive data, you really need to think about how to do this securely. Once your data is in, validate it and clean it. What are you going to do if there's null data? What are you going to do if your data's been manually entered and there are mistakes? How are you going to standardize and curate it? This is mostly the data engineer's domain.

What this flow is meant to show is that, a bit like DevSecOps, you have a lot of different people overlapping to get this done. Once the data is done, we bring in our data scientist and ML engineer, and we think about our features: what are we looking at as features for our ML? Then we have our model. This comes back to what should be the final check here, our business user: what problem are we solving? Is it regression? Is it classification? Is it anomaly detection? Random Cut Forest is great for that. Once you have your model, train it, validate it, evaluate it. OK, we have it ready. We've got our data, we've got our machine learning. Now let's do it at scale: containerize, deploy, make it servable to applications. Then it all starts again, because once you've got it working, great, it's time to keep updating and retraining your models.
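Random Cut Forest itself is an Amazon algorithm, so purely as an illustrative stand-in, here is a minimal scikit-learn sketch of the anomaly-detection case using IsolationForest, a closely related tree-based method. The data, feature count, and contamination rate are all invented.

```python
# Toy anomaly detection: IsolationForest as a stand-in for Random Cut Forest.

import numpy as np
from sklearn.ensemble import IsolationForest
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
normal = rng.normal(loc=0.0, scale=1.0, size=(1000, 3))   # routine records
outliers = rng.normal(loc=6.0, scale=1.0, size=(20, 3))   # e.g. unusual transactions
X = np.vstack([normal, outliers])

X_train, X_test = train_test_split(X, test_size=0.2, random_state=0)

model = IsolationForest(contamination=0.02, random_state=0).fit(X_train)
labels = model.predict(X_test)   # +1 = normal, -1 = anomaly
print(f"{(labels == -1).sum()} of {len(labels)} test records flagged as anomalies")
```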

You can see there are a couple of different roles involved in this. Really, when I think of MLOps, you have data, which I've spoken about quite a bit, and you have your machine learning, something Azhir is very good at. If you only focus on those two, you get back into the problem of "it worked on my machine," which is something a lot of people will remember from the DevOps world. This is why you need to bring along the best practices from software engineering: pipelines that can standardize, automate, and drive efficiency. You can add security scanning and pipelines as code, so that you can do continuous integration and continuous deployment, so CD4ML. It's really this overlap, because if you have one data source, one model, and one machine, OK, great, you're doing AI. This is very much about how you do it at enterprise scale and make it accessible.
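As a toy illustration of that Monopoly-board flow, the sketch below chains the stages as plain Python functions. In a real CD4ML setup each stage would be a job defined as pipeline-as-code (for example in GitLab CI) with security scanning, artifacts, and approvals attached; everything here, including the trivial "model", is invented.

```python
# A toy sketch of the flow as explicit, repeatable stages.

def ingest():        return [{"amount": 12.0}, {"amount": None}, {"amount": 7.5}]
def validate(rows):  return [r for r in rows if r["amount"] is not None]   # drop nulls
def features(rows):  return [[r["amount"]] for r in rows]
def train(X):        return {"mean": sum(x[0] for x in X) / len(X)}        # stand-in "model"
def evaluate(model): return abs(model["mean"]) < 100                       # trivial sanity check
def deploy(model):   print("deploying", model)

model = train(features(validate(ingest())))
if evaluate(model):
    deploy(model)   # then the loop starts again: monitor, retrain, redeploy
```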

When Things Go Wrong

What happens when things go wrong? These are mainly news stories from the last six months, and they cover a few things: bias, hallucination, and going rogue. We had DPD on January 18th. A lot of organizations are using a chatbot as part of customer service; unfortunately for DPD, their chatbot started swearing at the customer and then telling the customer how terrible DPD is as a company. Not ideal, not the type of behavior you're trying to generate. Bias is a really important area, and has been for maybe 10 years. One of my favorite birthday presents I received at the time was a book, "Weapons of Math Destruction" by Cathy O'Neil, which is very much about the black box problem. It's great. How do we stop bias if a model is trained on specific datasets? For example, with image detection, the UK passport office exhibited bias based on the color of people's skin. Hallucinations are, I'd say, a more recent problem, and a real problem for a few reasons.

A few weeks ago, you had AI hallucinating software dependencies. I'm a developer, and I'm like, yes, PyTorture, that sounds legit, let me just add that. If I take my developer hat off and put my hacker hat on: ChatGPT is telling me about a component that doesn't exist, so I know what I'm going to do. I'm going to create that component, and I'm going to put malware in it. Then everyone who uses ChatGPT for development and uses PyTorture will be infected. It's not just dependencies; there was also a case of AI hallucinating a fake embezzlement claim. What was interesting is that this became the first case of AI being sued for libel. When I speak to organizations who are still earlier in the journey, this is where the lawyers come in and think, what if the AI does something that is suable, or injures someone? This is where the whole ethical AI headspace comes in. In terms of security, the top one is plugins.

Plugins are great, but you need to make sure you're doing access control; I'll be talking about this a bit later. One critical ChatGPT plugin vulnerability exposed sensitive data, and it was to do with the OAuth mechanism. Who remembers Log4j from a few years ago? Log4j happened just before Christmas, and then, I think maybe two years later, there was a vulnerable version of PyTorch published between Christmas Day and the 30th of December, when not many people are around. It collected system information, including files such as /etc/passwd and /etc/hosts. Supply chain isn't a machine-learning-specific vulnerability; it's something that affects everyone, and what you'll see in this talk is that a lot of it is about the basics. Finally, prompt injection. This is ML-specific. What happened with Google Gemini is that it enabled direct content manipulation, which meant you could start creating fictional accounts, and attackers could see prompts that only the developers should have been able to see.

Responsible, Secure, and Explainable AI

Mahmood: Now that you know essentially all the problems that can go wrong with machine learning models and generative AI, how do you avoid them? Fundamentally, you want to develop a responsible framework. Responsible AI is really about the governance and the set of principles your company adopts when developing machine learning models. There are a few commonalities: a lot of people focus on human-centric design, fairness and bias, explainability and transparency. Ultimately, you should ensure that your responsible AI principles align with your company values. What are some responsible AI principles out there? Here, we're spanning Google and Accenture, and you'll recognize a few commonalities. For example, there is a focus on fairness.

Fundamentally, you don’t want your machine learning model to be treating people differently. There’s a focus on robustness. You want to ensure that the model is constantly working and the architectures and pipelines are functional. Transparency, it should be the case that you should be able to challenge the machine learning model. Every company, from Google and Accenture, seem to have overlaps here. Fundamentally, what you want to avoid is you want to avoid having a situation where you’re developing Terminator, because that doesn’t really align with human values and generally isn’t a good idea. Assuming you want Earth to continue being here.

What are some practical implementations you can use? Be human-centric. Ultimately, machine learning and AI are technologies, and what matters is how actual users experience that technology. It's not useful if you develop a large language model and essentially nobody knows how to communicate or interact with it, or it's spewing swears at people. You want to continuously test. I recognize that the world of AI feels very new, but ultimately, it's still software: you need to test every component. You need to test your infrastructure. You need to test pipelines. You need to test continuously during deployment. Just because a model works doesn't necessarily mean the way it interacts with the rest of the system will get you the result you want. There are multiple components here.

Also, it’s important to recognize the limitations. Fundamentally, AI is all about the architecture as well as the data. Analyze your data. Identify biases of your model and your data. Make sure you’re able to communicate those biases and communicate the limitations of your system to your consumers and pretty much everyone. Finally, identify metrics. To understand the performance of your model, you need to use multiple metrics, and they give you a better idea of performance. You need to understand the tradeoffs, and thus you’re able to really leverage machine learning to its full capacity.

To summarize, develop a set of responsible AI principles that align with your company values, and consider the broader impact within society. Don’t build something like Terminator. Be human centric. Engage with the broader AI community. Ensure you’re thinking about how people are interacting with these architectures. Rigorously test and monitor every component of your system. Understand the data, the pipelines, how it all integrates into a great, harmonious system.

Now that we understand how to be responsible, how do you stay secure?

Chaplin: Is anyone from security, or has anyone worked in security? This might be music to your ears, because we're going to cover a lot of the basic principles that are applied across the IT organization. Secure AI means secure and compliant development, deployment, and use of AI. First, I'm going to talk a little bit about attack vectors and popular vulnerabilities, then some prevention techniques, and then summarize. You can see this is my simplified architecture, where we have a hacker, we've got the model data, and we've got some hacks going on. Who's heard of OWASP before? OWASP do a lot of things. Something they're very famous for is the Top 10 vulnerabilities. It started off with web; they do mobile and infrastructure as code, and they do LLMs as well. This one, I think, was released in '22 or '23. What you'll notice with the top 10 is that some are very LLM-specific, for example prompt injection and training data poisoning, and some are more generic.

Denial of service, for instance, could happen to your models, your servers, or your laptop. Supply chain vulnerabilities are what I mentioned earlier with PyTorch. If you map that top 10 onto the architecture, we have our users on the outside, we have different services, and we have our data, plugins, extensions, and more services. What you'll notice is that there are multiple different attack vectors and multiple places these vulnerabilities can happen. For example, sensitive information disclosure pops up many times, especially around the data area, and excessive agency pops up again. In security, you are only as strong as your weakest link. You need to make sure every link, every process between these services, users, and data, is as secure as possible, because otherwise you leave yourself open to a lot of vulnerabilities.

In terms of prevention techniques, with security there are a lot of what I call security basics. For example, access control: who can do what, and how? How are you managing privileges amongst your users? How are you deciding who can access what data, or who can change the model? Also, once you've set that up, what happens if something goes wrong? Monitoring is very important. It actually moved up in the OWASP Top 10, from, I think, number nine to number six, because if you're not doing logging and monitoring, when something goes wrong, how are you going to know? Whether it's denial of service, supply chain, or someone getting into your system, it's very important to have logging and monitoring in place. Specifically for data, you need to think about validation, so sanitization and integrity.

I worked in security training for a few years, and a rule I push is to just check the input: are you who you say you are? I know that training data usually comes from a known place, but are we verifying that it really is what it says it is? That alone would solve a lot of different vulnerabilities. The same goes for things like upgrading your components and having a suitable patch strategy in place. This is what I meant by security basics: access control, monitoring, patching. If you want a good example of a really solid framework, check out Google's SAIF, the Secure AI Framework. You can go online and find it. They've got some really useful educational material covering all these concepts: security foundations, detection and response, defenses, controls, and risk and context within business processes.
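To ground those basics, here is a hedged sketch of input validation plus logging sitting in front of a model call. The length limit, the injection pattern, and all the names are arbitrary examples rather than a recommended rule set; a real system would combine this with proper access control and alerting.

```python
# Illustrative only: validate the input and log every request so that
# monitoring has something to alert on before a prompt reaches the model.

import logging
import re

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("inference")

MAX_PROMPT_CHARS = 4000
SUSPICIOUS = re.compile(r"ignore (all|previous) instructions", re.IGNORECASE)

def guarded_inference(user_id: str, prompt: str) -> str:
    if len(prompt) > MAX_PROMPT_CHARS:
        log.warning("user=%s rejected: prompt too long (%d chars)", user_id, len(prompt))
        raise ValueError("prompt too long")
    if SUSPICIOUS.search(prompt):
        log.warning("user=%s rejected: possible prompt injection", user_id)
        raise ValueError("prompt rejected")
    log.info("user=%s prompt accepted (%d chars)", user_id, len(prompt))
    return call_model(prompt)

def call_model(prompt: str) -> str:
    # Placeholder for the actual model or API call.
    return "ok"

print(guarded_inference("user-42", "Summarize this quarter's sales"))
```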

To summarize: adopt security best practices for your AI and MLOps processes. If you're doing something for DevSecOps or InfoSec, you can probably apply those principles to your MLOps and AI initiatives; I've mentioned access control, validation, and supply chain verification. Number two, almost a security manifesto: you are only as strong as your weakest link, so make sure all your links are secure. Finally, check out OWASP and Google SAIF when designing and implementing your AI processes. You can also sign up for our newsletter, and we'll have more information.

Now we’ve talked about secure AI, let’s talk about explainable AI

Mahmood: What is explainability? Fundamentally, explainability is a set of tools that help you interpret and understand the decisions your AI model makes. That's an overview. Why should you use explainable methods? Number one, transparency: fundamentally, you should be able to challenge the judgment that your machine learning model is making. Then, trust. Your machine learning model might perform well as a black box, but ultimately, nobody is really going to trust a black box; you want to bridge that barrier. It improves performance, because if you can understand your system, you can identify components that are weak and then address them. You're also able to minimize risk: by using an XAI framework (XAI is shorthand for explainable AI), you're also able to comply with most regulatory and compliance frameworks. That's a broad overview of its importance. We're not going to go into too much detail, but here are a few techniques. This is a broad landscape of what the XAI field of research looks like.

Fundamentally, there are model-agnostic approaches, which treat your machine learning model as a black box and then allow you to interpret that black box. You have your large language model; it's not initially interpretable. You use a few techniques, and you can at least better understand which components are working and how they're working. Then there are model-specific approaches. These are tailor-made approaches specific to your architecture. With a model-agnostic approach, you could have any architecture: transformers, which are generally what large language models are, or MLPs, which are quite common deep learning architectures. With model-specific approaches, the technique is specifically tailored to your machine learning architecture or even your domain.

Then you also have both global analysis, which looks at your whole model and attempts to understand it in its entirety, and local analysis, which looks at how individual predictions function. Within the model-agnostic approaches, there's a common technique called SHAP. SHAP uses game theory and ultimately helps identify the most important features. You also have LIME, which fits a simple local approximation around each prediction; it's really good for explaining single data points. Then there's a broader, holistic approach developed by Stanford University called HELM, the Holistic Evaluation of Language Models. It looks at the whole model and has a number of metrics you can leverage. Then there's BertViz, which helps you visualize the attention mechanism within your model. This is a whistle-stop tour of explainable AI.
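As a quick, hedged illustration of SHAP in practice, the snippet below trains a small regressor on synthetic data and asks which features drive its predictions using the open-source shap package. TreeExplainer is shap's fast, model-specific path for tree ensembles; the library also offers model-agnostic explainers such as KernelExplainer. The data and feature weights are invented.

```python
# Minimal SHAP sketch: which features drive a tree model's predictions?

import numpy as np
import shap
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))
y = 3 * X[:, 0] - 2 * X[:, 1] + rng.normal(scale=0.1, size=500)   # feature 2 is irrelevant

model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

explainer = shap.TreeExplainer(model)          # fast path for tree ensembles
shap_values = explainer.shap_values(X[:100])   # local attributions per prediction
print("mean |SHAP| per feature:", np.abs(shap_values).mean(axis=0))
```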

I imagine what you’re probably more interested is how organizations are using explainable AI. In the case of BlackRock, initially they had a black box. The performance of the black box was great. It’s actually pretty superb. What happened was the quants couldn’t explain the decision-making processes to stakeholders or customers, and as a result, that black box model was scrapped. It was then replaced with an XAI process, and that way the decision-making process could be understood. At J.P. Morgan, they’ve heavily invested in XAI. Actually, they have a whole AI institute dedicated to research, which they then integrate. At PhysicsX, we actually leverage domain knowledge. I, for example, may develop an architecture, I get a few results.

Then what happens is I communicate with the civil engineer or the mechanical engineer to really understand: does my prediction make sense? Then I leverage that expert judgment to improve my model and understand its failure points. IBM also leverage things called counterfactuals. Say, for example, they might tell a large language model, "I have black hair," and it'll provide some result. The counterfactual of "I have black hair" is "I have red hair" or "I do not have black hair." That helps you better interpret your model where data is missing. Both PayPal and Siemens have great white papers that really go into the details of this whole field.

To quickly summarize: you can use expert judgment and domain knowledge to better interpret and understand the performance of your architecture. It's a good idea to stay up to date with the latest research; it's an incredibly fast-moving field. Make use of model-agnostic approaches and model-specific approaches. Think about your model globally, how does the whole architecture work, as well as locally, how does it perform for each individual data point? There's a really great paper from Imperial called "Explainability for Large Language Models," which I imagine many of you might find interesting, and which identifies approaches specific to that domain. You can also leverage open-source tools as well as classical techniques. For large language models, there's BertViz, HELM, LIT, and Phoenix; these are all open-source tools you can go and download today to get a better understanding of your model's performance. You can also use more classical statistical techniques to understand the input data, the output data, and how things are really performing.

The Future of AI

Chaplin: We’ve spoken a little bit about how GenAI is revolutionizing the world, highly regulated industries, sensitive data, regulation, when AI goes wrong. We’ve given you a responsible AI framework that covers responsibility, security, and explainability as well. Let’s talk a little bit about the future of AI. We’re going to take it from two lenses, my view, and then Azhir’s. I come from a cybersecurity background. Maybe I’m biased. How AI can strengthen cybersecurity in the future. Michael Friedrich was talking about AI automated vulnerability resolution. As a developer, vulnerability in my pipeline reflected cross-site scripting. What does that mean? What AI will do, one, it will explain the vulnerability. “That’s what it is. This is what the issue is.” Even better, we have automated resolution. Using GitLab, and there are other tools out there, we will generate a merge request with the remediation. As a developer, my workflow, it’s like, I’ve got a problem. This is what that means. This is the fix. It’s all been done for me. I even have an issue created, and all I need to do is manually just approve.

The reason we have that final manual step is that if you have a change, say you're updating a component from version 2.1 to version 9.1, you might want to check that against your code, because it's probably going to introduce breaking changes. That's the developer perspective on security. From a more ops perspective, there's incident response and recovery. AI is very good at noticing anomalies, and PagerDuty are doing a good job at this, because they can identify that something unexpected is happening and react quickly. Maybe we block it, maybe we alert someone, because the faster you react, the more you slow the attack down.

An example: if anyone remembers the Equifax hack from a few years ago, it was a Struts 2 component, and it took them four months to notice that they had been hacked. You can imagine the damage. It was actually about 150 million personal records, more than double the population of the UK and a large portion of America's. This was before GDPR; otherwise, the fine would have been huge. They lost a third of their market cap, and it all came down to that attack vector. Finally, NVIDIA are using GenAI as part of their phishing simulations. Using GenAI, you can generate these sandboxed cybersecurity trainings to help get all of your users as secure as possible. I'm sure everyone's come across social engineering. My brother-in-law is an accountant, and his whole team are terrified of opening the wrong link and accidentally setting the company on fire. GenAI can really help to speed up these phishing and cybersecurity initiatives.

Mahmood: More broadly, what does the future of AI actually look like? What we're likely to see is an increasing prevalence of large foundation models. These are models trained on immense datasets which may then be used across numerous domains. They may be multimodal or distributed, and they'll be used for everything from drug development to industrial design; they'll touch every component of our lives. As they integrate into our lives, we expect AI to become increasingly regulated, especially as these models are integrated into health tech, finance, and all these highly regulated domains.

Summary

When designing and implementing AI models, think responsibly. Make sure to use a responsible, secure, and explainable framework.

Chaplin: Keep an eye on the legislation and regulations to stay compliant, not only in the country you are based in, but also in your customers' countries.

Mahmood: AI is more than just tech. It’s all about people. It’s how these architectures and models interact with the broader society, how they interact with all of us. Think more holistically.

Chaplin: If you are interested in finding out more, we work with organizations doing MLOps, doing AI design, doing a lot of things, so check out our website.

Questions and Answers

Participant 1: You’re talking about HELM as an explainability framework, but to my knowledge, it’s just an evaluation method with benchmarks on a leaderboard. Can you elaborate a bit on that?

Mahmood: Fundamentally, one way you can think about explainability is really understanding multiple benchmarks. With HELM, for example, I have to read through the paper to make sure I fully interpret everything. If you're able to understand how your performance changes, say, when you pretrain on a different dataset and then evaluate, it gives you more insight into your model. That's a way you can interpret it holistically. That's one way you could leverage HELM.

Participant 1: That doesn’t give you any explainability for highly regulated environments, I think. You compared it with SHAP and LIME, and there you get an explanation of your model and your inference.

Mahmood: Within explainable AI, there are multiple ways you can think about explainability. There is the framework of interpretability, where interpretability is fundamentally understanding each component. I would probably argue that, yes, LIME provides you with some degree of interpretability, but explainability is much more of a sliding scale. At one end you might have a white box model, where you understand every component, or you might understand the components of your attention mechanism, while you can also use more holistic metrics to probe your datasets and understand them. Those could be applied to less regulated domains, but still relatively regulated domains. You would, of course, use HELM with other explainable frameworks. You shouldn't be relying on a single framework; you should be leveraging a wide range of tools. HELM is one metric, and you can leverage numerous other metrics.

Participant 1: Shouldn’t you promote more the white box models, the interpretable ones, above the explainability of black box models, because LIME, SHAP are estimations of the inference. With the interpretable model, you know how they reason and what they do. I think in a highly regulated environment, it’s better to use white box models than starting with black box models and trying to open them.

Mahmood: The idea is leveraging white box models over black box models, and attempting to interpret black box models. To some degree, I agree. We could leverage white box models increasingly within regulated domains, but what you end up finding is we sacrifice a great deal of performance by leveraging a white box model. It’s not necessarily easy to integrate a white box model into a large language model, that’s still an area of research. You do lose some degree of performance.

Then, within the black box models, yes we have great degrees of performance. Fundamentally, say, for example, in medical vision, it makes more sense to use a CNN, because using a CNN, you’d be able to detect cancer more readily. Using a white box model, yes, it’s interpretable, but it might be the case it’s more likely to make mistakes. As a result, what you’d use then is a domain expert with that black box model. I would probably say it fundamentally depends. It depends on how regulated your industry is. How much performance do you need, again, fundamentally in your domain? There’s a whole host of tools out there.




Learning from Embedded Software Development for the Space Shuttle and the Orion MPCV

MMS Founder
MMS Ben Linders

Article originally posted on InfoQ. Visit InfoQ

Software development is much different today than it was at the beginning of the Space Shuttle era because of the tools that we have at our disposal, Darrel Raines mentioned in his talk about embedded software development for the Space Shuttle and the Orion MPCV at NDC Tech Town. But the art and practice of software engineering has not progressed that much since the early days of software development, he added.

Compilers are much better and faster, and debuggers are now integrated into our development tools, making the task of error detection much easier, as Raines explained:

There are now dedicated analysis tools that allow us to detect certain types of issues. Examples are static code analyzers and unit test frameworks. We have configuration management systems like “git” to make our day to day work much easier.

Raines argued that many things are the same today as they were when they started writing software for the Space Shuttle. One of the best ways to detect software problems is still with a thorough code review performed by experienced software engineers, he said. Many defects will remain latent in the developed code until we hit just the right combination of factors that allow the defect to show itself. It is imperative to use all the different testing methods available to us to find bugs and defects before we fly, he added.

Raines mentioned that there is one important thing about their software that is very different than most other embedded software:

We cannot easily debug and fix software that is deployed in space! We continually remind ourselves that any testing and debugging that we do on the ground could potentially save a crew when we get to space.

He mentioned that software developers engage with astronauts at many levels during their work. They discuss requirements with astronauts, and talk about how much of a workload they want and how much they can handle. This evaluation allows them to decide on the level of autonomy that the software will have, as Raines explained:

We spend time thinking about how astronauts would recover from various faults. We determine how the harsh environment of space may affect our software in ways that we don’t even have to think about with ground computers.

The hardware used for the major programs is very often generations behind what we have on our phones and on our home computers, Raines said. The software has to be very efficient because they continually struggle with the CPU being saturated. They also run into problems with the onboard networks running out of bandwidth.

C/C++ is the most common computer language used because of its efficiency. Modern compilers help make C code relatively easy to write and debug, Raines said. Since C has been around for a long time, it is well understood and highly optimized on most platforms. There are also spacecraft that have used Fortran (Space Shuttle flight computers) and Ada (Space Station onboard computers).

The choice of language is a major factor in how the code is developed and tested. C/C++ will allow you to do "dangerous" things within the code, as Raines explained:

Null pointers are a constant worry since we have to use them sometimes instead of references.

The most noticeable impact on development is that they need to perform multiple levels of testing on their code, Raines said. They start with unit tests, followed by unit integration tests, then full integration testing, and finally formal verification tests. Each level of testing tends to find different kinds of defects in the software, Raines mentioned.

The impact of failed code can sometimes be a loss of crew or a loss of mission, Raines said. This weighs heavily on decisions about how much testing to do and how stringently to perform those tests, he concluded.

InfoQ interviewed Darrel Raines about software development at NASA.

InfoQ: How have changes in the way software is developed impacted the work?

Darrel Raines: All of the tools that are available these days make it much easier to concentrate on the important task of making the code work the way we intend it to work.

The adage that the “more things change, the more they stay the same” is an important concept in my job. I am always willing to try new technology as a way of advancing my ability to develop software. But I remain skeptical that the “next big thing” will really make a big difference in my work.

What usually happens is that we make gradual changes over the years that improve our ability to do our work, but we remain consistent with the principles and techniques that have worked for us in the past.

InfoQ: What makes spacecraft software special?

Raines: One example I use with my team is this: if my computer locks up on my desktop, I can just reset the computer and start again. If we lose a computer due to a radiation upset in space, we may not be able to reestablish our current state unless we plan to have that information stored in non-volatile memory. It is a very different environment.

The astronauts, as educated and trained as they are, cannot debug our software during a flight. So we have to be as close to perfect as we can prior to launching the vehicle.

It may mean the difference between a crew coming home and losing them. This difference is what makes spacecraft software special. This is what makes it challenging.

About the Author



Snowflake vs. MongoDB: Which Data Platform Stock is a Better Pick? – April 15, 2025

MMS Founder
MMS RSS

Posted on mongodb google news. Visit mongodb google news


Article originally posted on mongodb google news. Visit mongodb google news



InfluxData rolls out InfluxDB 3 to power real-time apps at scale – Blocks and Files

MMS Founder
MMS RSS

Posted on nosqlgooglealerts. Visit nosqlgooglealerts

InfluxData has released InfluxDB 3 Core and Enterprise editions in a bid to speed and simplify time series data processing.

InfluxDB 3 Core is an open source, high-speed, recent-data engine for real-time applications. According to the pitch, InfluxDB 3 Enterprise adds high availability with auto failover, multi-region durability, read replicas, enhanced security and scalability for production environments. Both products run in a single-node setup, and have a built-in Python processing engine “elevating InfluxDB from passive storage to an active intelligence engine for real-time data.” The engine brings data transformation, enrichment, and alerting directly into the database.


Founder and CTO Paul Dix claimed: “Time series data never stops, and managing it at scale has always come with trade-offs – performance, complexity, or cost. We rebuilt InfluxDB 3 from the ground up to remove those trade-offs. Core is open source, fast, and deploys in seconds, while Enterprise easily scales for production. Whether you’re running at the edge, in the cloud, or somewhere in between, InfluxDB 3 makes working with time series data faster, easier, and far more efficient than ever.”

A time-series database stores data, such as metrics, IoT and other sensor readings, logs, or financial ticks, indexed by time. It typically features high and continuous ingest rates, compression to reduce the space needed, old data expiration to save space as well, and fast, time-based queries looking at averages and sums over time periods. Examples include InfluxDB, Prometheus, and TimescaleDB.


InfluxData was founded in 2012 to build an open source, distributed time-series data platform. This is InfluxDB, which is used to collect, store, and analyze all time-series data at any scale and in real-time. CEO Evan Kaplan joined in 2016. The company raised around $800,000 in a 2013 seed round followed by a 2014 $8.1 million A-round, a 2016 $16 million B-round, a 2018 $35 million C-round, a 2019 $60 million D-round, and then a 2023 $51 million E-round accompanied by $30 million in debt financing.

Kaplan has maintained a regular cadence of product and partner developments:

  • January 2024 – InfluxDB achieved AWS Data and Analytics Competency status in the Data Analytics Platforms and NoSQL/New SQL categories.
  • January 2024 – MAN Energy Solutions integrated InfluxDB Cloud as the core of its MAN CEON cloud platform to help achieve fuel reductions in marine and power engines through the use of real-time data.
  • March 2024 – AWS announced Amazon Timestream for InfluxDB, a managed offering for AWS customers to run InfluxDB within the AWS console but without the overhead that comes with self-managing InfluxDB.
  • September 2024 – New InfluxDB 3.0 product suite features to simplify time series data management at scale, with performance improvements for query concurrency, scaling, and latency. The self-managed InfluxDB Clustered, deployed on Kubernetes, went GA, and featured decoupled, independently scalable ingest and query tiers.
  • February 2025 – InfluxData announced Amazon Timestream for InfluxDB Read Replicas to boost query performance, scalability, and reliability for enterprise-scale time series workloads.

The new InfluxDB 3 engine is written in Rust and built with Apache Arrow, DataFusion, Parquet, and Flight. We’re told it delivers “significant performance gains and architectural flexibility compared to previous open source versions of InfluxDB.” The engine can ingest millions of writes per second and query data in real-time with sub-10 ms lookups.

The Python engine “allows developers to transform, enrich, monitor, and alert on data as it streams in, turning the database into an active intelligence layer that processes data in motion – not just at rest – and in real-time.” This reduces if not eliminates the need for external ETL pipelines.
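For a feel of what getting data in looks like, here is a hedged sketch that posts a single point in InfluxDB line protocol over HTTP. The port, endpoint path, database name, and token are assumptions made for illustration; check the InfluxDB 3 documentation for the exact write API and authentication of your deployment.

```python
# Hedged sketch: write one point in line protocol to a local InfluxDB 3 instance.

import time
import requests

# measurement,tag=value field=value timestamp_ns
line = f"machine_temp,site=berlin,device=pump-7 value=71.3 {time.time_ns()}"

resp = requests.post(
    "http://localhost:8181/api/v3/write_lp",      # assumed local Core endpoint
    params={"db": "sensors"},                      # assumed database name
    headers={"Authorization": "Bearer MY_TOKEN"},  # placeholder token
    data=line,
    timeout=5,
)
resp.raise_for_status()
print("wrote:", line)
```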

Both new products fit well with the existing InfluxDB 3 lineup, which is designed for large-scale, distributed workloads in dedicated cloud and Kubernetes environments and has a fully managed, multi-tenant, pay-as-you-go option.

InfluxDB 3 Core is now generally available as a free and open source download. InfluxDB 3 Enterprise is available for production deployments with flexible licensing options. Read more here.
