Month: December 2024
MMS • RSS
Posted on mongodb google news. Visit mongodb google news
The era in which companies locked away the technology they developed behind “patents” is changing. Open source, which discloses key technologies and even detailed code so that anyone can see them, is becoming the basis of a new industry in the software (SW) market.
Open source is free, but many companies are finding ways to build business models around it and generate revenue.
U.S.-based Red Hat is a representative example. Red Hat is a global provider of enterprise open-source solutions, including Linux, cloud, containers, Kubernetes, and more. Red Hat developed an enterprise-class Linux distribution, introduced a subscription model, and ensured thorough quality control and long-term support. In 1999, IBM began loading Red Hat Linux onto corporate servers, and various Wall Street financial institutions also began adopting Red Hat to reduce costs.
After launching its first premium enterprise Linux in 2002, Red Hat became the first open-source technology company to surpass $1 billion in sales in 2012, and IBM acquired it in 2019 for about $34 billion, the largest acquisition in IBM’s history. Red Hat currently accounts for about 17.5% of IBM’s total software sales, a significant increase from 9.2% at the time of the 2019 acquisition, meaning Red Hat makes up a growing share of IBM’s software division.
MongoDB is a cloud service provider built around an open-source database. Having grown out of its developer community, MongoDB established a monetization model by offering enterprises advanced security features, audit functions, and professional support services, and by operating certification programs.
It went on to raise $150 million in a Series F round in 2013 and, by 2014, counted more than 30 of the top Fortune 500 companies among its customers. MongoDB’s revenue for the third quarter of 2024 amounted to about 580 billion won.
Companies based on excellent open-source technology are also emerging in Korea. ID Tech company Hopae, which provides digital identity authentication solutions, uses open source as a market entry strategy. The open-source code created by the Hopae team has more than 2 million global downloads and is actively used for various projects.
According to the Software Policy Research Institute, the global open source market is expected to grow from $27.7 billion (about 38.4 trillion won) in 2022 to $75.2 billion (about 104.2 trillion won) in 2028.
Article originally posted on mongodb google news. Visit mongodb google news
MMS • RSS
Posted on nosqlgooglealerts. Visit nosqlgooglealerts
Those who have been following the latest tech trends will know that two areas that are seeing the biggest developments are fintech and data management. Both sectors have evolved to keep up with modern technological innovations such as smart devices, the Internet of Things, and the rise of artificial intelligence and machine learning.
In the past decade, fintech has seen an explosion in usage as more people use smart technology to manage their finances. A recent fintech report from the World Economic Forum and the Cambridge Centre for Alternative Finance found that customer growth rates averaged above 50%. As more people use fintech, more data is being created that can be used to improve financial services. This has led to more fintech companies using NoSQL databases to keep up with the increasing amount of data that fintech applications collect.
What is a NoSQL Database?
A NoSQL database is different from a traditional SQL database because it is much more flexible in how it stores data. While SQL databases store and organize data in tables, a guide to MongoDB’s NoSQL databases details how a NoSQL database can store data in four different data models: document databases, key-value databases, wide-column stores, and graph databases. Each NoSQL database has its own unique features while also being flexible, scalable, and able to distribute data across multiple databases. NoSQL databases also allow developers to store huge amounts of unstructured data. This data doesn’t have a fixed schema and can include text, images, video, and data from social media posts, emails, and smart devices. This allows the database to build massive datasets of different types of data that can then be combined to find patterns and recommend services. The fintech industry is effectively using these advantages of NoSQL databases.
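To make the document model concrete, here is a minimal sketch using the official MongoDB Node.js driver in TypeScript. It is only an illustration of schema flexibility; the connection string, database, collection, and field names are placeholders invented for the example.

```typescript
import { MongoClient } from "mongodb";

// Placeholder connection string; point this at a real MongoDB or Atlas instance.
const client = new MongoClient("mongodb://localhost:27017");

async function storeTransactions(): Promise<void> {
  await client.connect();
  const payments = client.db("fintech").collection("transactions");

  // Documents in the same collection need no fixed schema: one record
  // carries QR metadata, another only card details.
  await payments.insertOne({
    userId: "u-1042",
    amountSGD: 23.5,
    channel: "qr",
    qr: { network: "SGQR", merchantId: "m-889" },
    createdAt: new Date(),
  });

  await payments.insertOne({
    userId: "u-1042",
    amountSGD: 120,
    channel: "card",
    card: { last4: "4242" },
    createdAt: new Date(),
  });

  await client.close();
}

storeTransactions().catch(console.error);
```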
Data Collection
The fintech industry is constantly evolving in terms of how people can pay for products and services. A recent innovation was the Singapore Quick Response Code. This allowed merchants to receive payments from multiple payment networks and apps at the same time, eliminating the need for multiple QR codes. While these apps are revolutionizing how people pay, they are also changing how the fintech industry stores users’ financial data. Most of the data from these apps is unstructured, whether it be financial transactions from mobile finance apps or search preferences from web-based solutions, and a NoSQL database can store it in one of its data models, depending on the fintech company’s needs. This is especially useful for fintech applications that deal with analytical and exploratory data, such as risk management, as it allows the application to find patterns.
Fraud Detection
As more people conduct their finances online, the risk of fraud has also increased. NoSQL database systems are better able to detect fraudulent activities than relational databases due to their ability to leverage multiple data sources and perform advanced real-time analytics. A ResearchGate paper on fraud detection in NoSQL database systems outlines how these databases can use machine learning to determine patterns pointing to anomalies. For example, a graph database, which is used to find contextual relationships between data points, can uncover discrepancies in a dataset. This is vitally important for fintech applications, where even the smallest shifts in client behavior patterns can point to fraudulent activity.
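The graph idea can be illustrated without a full graph database. The sketch below is a simplified, in-memory TypeScript illustration rather than production fraud logic: it links accounts to shared devices and flags unusually large clusters, with the account names and threshold invented for the example.

```typescript
// Edges observed in transaction logs: which account used which device.
type Edge = { account: string; device: string };

const edges: Edge[] = [
  { account: "acct-1", device: "dev-A" },
  { account: "acct-2", device: "dev-A" },
  { account: "acct-3", device: "dev-A" },
  { account: "acct-4", device: "dev-B" },
];

// Group accounts by shared device, mimicking the relationship query
// a graph database would run natively.
const accountsByDevice = new Map<string, string[]>();
for (const { account, device } of edges) {
  accountsByDevice.set(device, [...(accountsByDevice.get(device) ?? []), account]);
}

// Flag devices shared by more accounts than a chosen threshold;
// real systems would derive the threshold from historical baselines.
const THRESHOLD = 2;
for (const [device, accounts] of accountsByDevice) {
  if (accounts.length > THRESHOLD) {
    console.log(`Possible fraud ring around ${device}:`, accounts);
  }
}
```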
Personalization
As we outlined in our post on the Evolution and Future of Accounting Software, customers are looking for personalized experiences. In fintech, this means a service that doesn’t cater to the wider population but instead offers solutions matching their individual financial needs. Financial organizations that collect a wide range of information about their customers can create comprehensive user profiles of the individual on a NoSQL database. Because a NoSQL database has a very flexible schema, financial companies can collect and store data from multiple sources. This allows them to collate different pieces of a client’s data to recommend the best services for the individual. This information can then be used to provide loans or offer financial services based on the individual’s financial history, risk aversion, and spending habits.
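As a rough sketch of that idea, the TypeScript below merges data fragments from different sources into a single flexible profile and applies a toy recommendation rule; the field names, values, and rule are invented for illustration and do not reflect any particular vendor’s model.

```typescript
// Fragments of one customer's data arriving from different sources.
const fromTransactions = { userId: "u-77", avgMonthlySpend: 1850 };
const fromRiskSurvey = { userId: "u-77", riskTolerance: "low" as const };
const fromWebActivity = { userId: "u-77", recentSearches: ["fixed deposit", "travel insurance"] };

// With a flexible schema, the profile is simply the merge of whatever is known so far.
const profile = { ...fromTransactions, ...fromRiskSurvey, ...fromWebActivity };

// A toy recommendation rule; real personalization would use far richer models.
const recommendation =
  profile.riskTolerance === "low" && profile.avgMonthlySpend > 1000
    ? "high-yield savings account"
    : "diversified index fund";

console.log(`Suggest a ${recommendation} to ${profile.userId}`);
```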
As the fintech industry grows, more fintech companies will use NoSQL databases to efficiently collect client data and use it to prevent fraud and provide personalized services.
MMS • RSS
Posted on mongodb google news. Visit mongodb google news
Artificial intelligence is the greatest investment opportunity of our lifetime. The time to invest in groundbreaking AI is now, and this stock is a steal!
The whispers are turning into roars.
Artificial intelligence isn’t science fiction anymore.
It’s the revolution reshaping every industry on the planet.
From driverless cars to medical breakthroughs, AI is on the cusp of a global explosion, and savvy investors stand to reap the rewards.
Here’s why this is the prime moment to jump on the AI bandwagon:
Exponential Growth on the Horizon: Forget linear growth – AI is poised for a hockey stick trajectory.
Imagine every sector, from healthcare to finance, infused with superhuman intelligence.
We’re talking disease prediction, hyper-personalized marketing, and automated logistics that streamline everything.
This isn’t a maybe – it’s an inevitability.
Early investors will be the ones positioned to ride the wave of this technological tsunami.
Ground Floor Opportunity: Remember the early days of the internet?
Those who saw the potential of tech giants back then are sitting pretty today.
AI is at a similar inflection point.
We’re not talking about established players – we’re talking about nimble startups with groundbreaking ideas and the potential to become the next Google or Amazon.
This is your chance to get in before the rockets take off!
Disruption is the New Name of the Game: Let’s face it, complacency breeds stagnation.
AI is the ultimate disruptor, and it’s shaking the foundations of traditional industries.
The companies that embrace AI will thrive, while the dinosaurs clinging to outdated methods will be left in the dust.
As an investor, you want to be on the side of the winners, and AI is the winning ticket.
The Talent Pool is Overflowing: The world’s brightest minds are flocking to AI.
From computer scientists to mathematicians, the next generation of innovators is pouring its energy into this field.
This influx of talent guarantees a constant stream of groundbreaking ideas and rapid advancements.
By investing in AI, you’re essentially backing the future.
The future is powered by artificial intelligence, and the time to invest is NOW.
Don’t be a spectator in this technological revolution.
Dive into the AI gold rush and watch your portfolio soar alongside the brightest minds of our generation.
This isn’t just about making money – it’s about being part of the future.
So, buckle up and get ready for the ride of your investment life!
Act Now and Unlock a Potential 10,000% Return: This AI Stock is a Diamond in the Rough (But Our Help is Key!)
The AI revolution is upon us, and savvy investors stand to make a fortune.
But with so many choices, how do you find the hidden gem – the company poised for explosive growth?
That’s where our expertise comes in.
We’ve got the answer, but there’s a twist…
Imagine an AI company so groundbreaking, so far ahead of the curve, that even if its stock price quadrupled today, it would still be considered ridiculously cheap.
That’s the potential you’re looking at. This isn’t just about a decent return – we’re talking about a 10,000% gain over the next decade!
Our research team has identified a hidden gem – an AI company with cutting-edge technology, massive potential, and a current stock price that screams opportunity.
This company boasts the most advanced technology in the AI sector, putting them leagues ahead of competitors.
It’s like having a race car on a go-kart track.
They have a strong possibility of cornering entire markets, becoming the undisputed leader in their field.
Here’s the catch (it’s a good one): To uncover this sleeping giant, you’ll need our exclusive intel.
We want to make sure none of our valued readers miss out on this groundbreaking opportunity!
That’s why we’re slashing the price of our Premium Readership Newsletter by a whopping 70%.
For a ridiculously low price of just $29, you can unlock a year’s worth of in-depth investment research and exclusive insights – that’s less than a single restaurant meal!
Here’s why this is a deal you can’t afford to pass up:
• Access to our Detailed Report on this Game-Changing AI Stock: Our in-depth report dives deep into our #1 AI stock’s groundbreaking technology and massive growth potential.
• 11 New Issues of Our Premium Readership Newsletter: You will also receive 11 new issues and at least one new stock pick per month from our monthly newsletter’s portfolio over the next 12 months. These stocks are handpicked by our research director, Dr. Inan Dogan.
• One free upcoming issue of our 70+ page Quarterly Newsletter: A value of $149
• Bonus Reports: Premium access to members-only fund manager video interviews
• Ad-Free Browsing: Enjoy a year of investment research free from distracting banner and pop-up ads, allowing you to focus on uncovering the next big opportunity.
• 30-Day Money-Back Guarantee: If you’re not absolutely satisfied with our service, we’ll provide a full refund within 30 days, no questions asked.
Space is Limited! Only 1000 spots are available for this exclusive offer. Don’t let this chance slip away – subscribe to our Premium Readership Newsletter today and unlock the potential for a life-changing investment.
Here’s what to do next:
1. Head over to our website and subscribe to our Premium Readership Newsletter for just $29.
2. Enjoy a year of ad-free browsing, exclusive access to our in-depth report on the revolutionary AI company, and the upcoming issues of our Premium Readership Newsletter over the next 12 months.
3. Sit back, relax, and know that you’re backed by our ironclad 30-day money-back guarantee.
Don’t miss out on this incredible opportunity! Subscribe now and take control of your AI investment future!
No worries about auto-renewals! Our 30-Day Money-Back Guarantee applies whether you’re joining us for the first time or renewing your subscription a year later!
Article originally posted on mongodb google news. Visit mongodb google news
MMS • Anthony Alford
Article originally posted on InfoQ. Visit InfoQ
Researchers from InstaDeep and NVIDIA have open-sourced Nucleotide Transformers (NT), a set of foundation models for genomics data. The largest NT model has 2.5 billion parameters and was trained on genetic sequence data from 850 species. It outperforms other state-of-the-art genomics foundation models on several genomics benchmarks.
InstaDeep published a technical description of the models in Nature. NT uses an encoder-only Transformer architecture and is pre-trained using the same masked language model objective as BERT. The pre-trained NT models can be used in two ways: to produce embeddings for use as features in smaller models, or fine-tuned with a task-specific head replacing the language model head. InstaDeep evaluated NT on 18 downstream tasks, such as epigenetic marks prediction and promoter sequence prediction, and compared it to three baseline models. NT achieved the “highest overall performance across tasks” and outperformed all other models on promoter and splicing tasks. According to InstaDeep:
The Nucleotide Transformer opens doors to novel applications in genomics. Intriguingly, even probing of intermediate layers reveals rich contextual embeddings that capture key genomic features, such as promoters and enhancers, despite no supervision during training. [We] show that the zero-shot learning capabilities of NT enable [predicting] the impact of genetic mutations, offering potentially new tools for understanding disease mechanisms.
The best-performing NT model, Multispecies 2.5B, contains 2.5 billion parameters and was trained on data from 850 species of “diverse phyla,” including bacteria, fungi, and invertebrates as well as mammals such as mice and humans. Because this model outperformed a 2.5B parameter NT model trained only on human data, InstaDeep says that the multi-species data is “key to improving our understanding of the human genome.”
InstaDeep compared Multispecies 2.5B’s performance to three other genomics foundation models: Enformer, HyenaDNA, and DNABERT-2. All models were fine-tuned for each of the 18 downstream tasks. While Enformer had the best performance on enhancer prediction and “some” chromatin tasks, NT was the best overall. It outperformed HyenaDNA on all tasks, even though HyenaDNA was trained on the “human reference genome.”
Besides its use on downstream tasks, InstaDeep also investigated the model’s ability to predict the severity of genetic mutations. This was done using “zero-shot scores” of sequences, calculated using cosine distances in embedding space. They noted that this score produced a “moderate” correlation with severity.
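The zero-shot score described above reduces to a distance between the embeddings of a reference sequence and a mutated sequence. The TypeScript sketch below shows only that cosine-distance calculation with made-up, low-dimensional vectors; in practice the embeddings would come from a Nucleotide Transformer checkpoint, which is not reproduced here.

```typescript
// Cosine distance between two embedding vectors: 1 - cosine similarity.
function cosineDistance(a: number[], b: number[]): number {
  const dot = a.reduce((sum, x, i) => sum + x * b[i], 0);
  const norm = (v: number[]) => Math.sqrt(v.reduce((s, x) => s + x * x, 0));
  return 1 - dot / (norm(a) * norm(b));
}

// Made-up embeddings standing in for model outputs; real embeddings
// have thousands of dimensions.
const referenceEmbedding = [0.12, -0.43, 0.88, 0.05];
const mutatedEmbedding = [0.1, -0.4, 0.35, 0.6];

// A larger distance suggests the mutation moves the sequence further from
// the reference in embedding space, i.e. a potentially larger impact.
console.log("Zero-shot score:", cosineDistance(referenceEmbedding, mutatedEmbedding));
```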
An InstaDeep employee, BioGeek, joined a Hacker News discussion about the work, pointing out example use cases in a Hugging Face notebook. BioGeek also mentioned a previous InstaDeep model called ChatNT:
[Y]ou can ask natural language questions like “Determine the degradation rate of the human RNA sequence @myseq.fna on a scale from -5 to 5.” and the ChatNT will answer with “The degradation rate for this sequence is 1.83.”
Another user said:
I’ve been trialing a bunch of these models at work. They basically learn where the DNA has important functions, and what those functions are. It’s very approximate, but up to now that’s been very hard to do from just the sequence and no other data.
The Nucleotide Transformers code is available on GitHub. The model files can be downloaded from Hugging Face.
AWS Adds New Amazon Q Developer Agent Capabilities: Doc Generation, Code Reviews, and Unit Tests
MMS • Steef-Jan Wiggers
Article originally posted on InfoQ. Visit InfoQ
AWS recently announced several enhancements to its generative AI-powered assistant, Amazon Q Developer, introducing new agent capabilities to streamline software development processes.
Amazon Q Developer, released in general availability earlier this year, is a generative AI–powered assistant for designing, building, testing, deploying, and maintaining software across integrated development environments (IDEs).
Channy Yun, a Principal Developer Advocate for AWS cloud, writes:
Amazon Q Developer has agents that can generate real-time code suggestions based on your comments and existing code, bootstrap new projects from a single prompt (/dev), automate the process of upgrading and transforming legacy Java applications with the Amazon Q Developer transformation capability (/transform), generate customized code recommendations from your private repositories securely, and quickly understand what resources are running in your AWS account with a simple prompt.
Now, the company has added several enhancements for the agent that include:
- Automated Documentation Generation: Developers can automatically generate comprehensive documentation, such as README files and data flow diagrams, directly within their integrated development environments (IDEs). This functionality reduces the time spent on manual documentation, allowing developers to focus more on coding and design.
- Automated Code Reviews: Amazon Q Developer can perform code reviews by analyzing codebases to detect issues related to code quality, security vulnerabilities, and adherence to best practices. The tool provides immediate feedback and suggests fixes, enhancing code reliability and accelerating the review process.
- Automated Unit Test Generation: The new capabilities enable the automatic generation of unit tests, improving test coverage and ensuring code robustness. By identifying and creating relevant test cases, Amazon Q Developer assists in maintaining high-quality code standards throughout the development lifecycle.
(Source: AWS News blog post)
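To give a sense of the kind of output such agents aim for, here is a hand-written TypeScript illustration of a small function and unit tests covering its edge cases (using the Vitest test API). It is not actual Amazon Q Developer output, just an example of the happy-path, boundary, and error cases that automated test generation typically targets.

```typescript
import { describe, expect, it } from "vitest";

// A small function a developer might ask the agent to cover with tests.
export function applyDiscount(price: number, percent: number): number {
  if (price < 0 || percent < 0 || percent > 100) {
    throw new Error("invalid input");
  }
  return Math.round(price * (1 - percent / 100) * 100) / 100;
}

describe("applyDiscount", () => {
  it("applies a standard discount", () => {
    expect(applyDiscount(200, 25)).toBe(150);
  });

  it("handles 0% and 100% discounts", () => {
    expect(applyDiscount(80, 0)).toBe(80);
    expect(applyDiscount(80, 100)).toBe(0);
  });

  it("rejects invalid inputs", () => {
    expect(() => applyDiscount(-1, 10)).toThrow("invalid input");
    expect(() => applyDiscount(50, 120)).toThrow("invalid input");
  });
});
```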
Note that alternatives to Amazon Q, such as GitHub Copilot, GitLab, and JetBrains AI Assistant, also offer code documentation, review, and unit test generation capabilities.
Ant Stanley, an AWS Hero, posted on Bluesky:
I want to check out the new Amazon Q Developer capabilities, particularly those related to code review and document generation, but Amazon Q’s lack of support for Zed probably means I’ll never get around to it.
Developers can leverage the new capabilities through commands within supported IDEs, including Visual Studio Code and IntelliJ IDEA. These features are available with Amazon Q Developer’s Free Tier or Pro Tier subscriptions.
Lastly, the new Amazon Q Developer agent capabilities for software development are now available in all AWS Regions where Amazon Q is available.
Presentation: Bits, Bots, and Banter: A Deep Dive into How Tech Teams Work in a DevOps World
MMS • Brittany Woods
Article originally posted on InfoQ. Visit InfoQ
Transcript
Woods: What I’d like to explore is how teams are working in a DevOps world. We all know the best practices that have been shared throughout everyone’s journey really into DevOps and what the ideal implementation looks like. What’s not really written very often is what teams are actually doing. Having worked across several companies in various roles and in various stages of maturity, I wanted to take that and share what I’ve seen across those roles: some things that I’ve witnessed to work, some things that I’ve witnessed that didn’t work. I’d like to think of myself as well-versed in the DevOps movement, with that varied experience. The reality is that every company is different.
Their needs are different, so their practices are going to look different. You’re going to hear a lot of opinions and a lot of stories that may contradict the things that I’m about to tell you, and that’s ok. The key takeaway is more so that DevOps is a flexible methodology, and it can work regardless of the industry that you’re in. Do I have opinions on what the best way is? Yes, we all do.
I’m a senior engineering manager with The LEGO Group. Primarily, I’m focused on building out their platform team that’s responsible for their e-commerce platform, lego.com. Before that, I worked for an American tax company. I was responsible for server automation, SRE, stuff like that. Then before that, I was an IC. I was a DevOps engineer, an automation engineer, and built out these practices from an engineering perspective.
DevOps, Abridged
What I really wanted to do was just start from the beginning. I mentioned that DevOps is a flexible methodology, and so everybody’s taken the bits and bobs that work for them out and implemented those. I wanted to level set on where I’m coming from with this, just so you were aware. The name of the game is to do this through a blend of collaboration and automation, and redefine what you consider a team. That’s how you really implement DevOps. You redefine what those ownership lines look like in the past. Through all of this change, you shift to using more of those methodologies among your teams, and that’s going to dictate the change, really, that you have to have in your culture.
Culturally speaking, you have to think of things as more of a shared responsibility model. You also have to start thinking about things from a team health and delivery perspective, either through the enablement or automation, or some combination of the two. That also means that lines are going to start to blur. Lines between teams are going to blur between DevOps, or as you matured in your DevOps practices. You have to either build, lead, or steer your organization in a direction that’s going to withstand that type of change. As I heard with ClearBank’s talk, a point that rang absolutely true was that you have to understand that this happens iteratively and not overnight. There’s only so much change in an organization that you can pump in in a really rapid timeframe.
Remember that this is continuous. This is continuous improvement that I’m talking about here, and that’s what I’m considering to be the standard that I’m going off of for this talk. You also have to start prioritizing education and learning, because DevOps teams need to be broader in their knowledge. I’ll talk more about that later, whenever I talk about the dreaded T-shaped engineering. Just know that DevOps teams have to be broader in their knowledge. Then, the plus side to all of this is that, in theory, all of this focus on culture and learning pays off in spades, because you start to see the efficiencies that DevOps brings.
You’re probably also going to ask how this helps teams. This help comes in the form of autonomy. In my opinion, autonomy is like the biggest measure of increased efficiency when you do DevOps and do it well. Through prioritizing learning and facilitating that cultural change, you’re opening teams up to be able to control that destiny. Those historical siloed specialties don’t exist whenever you do this correctly. Then, done incorrectly, you also have to be aware that new silos can form. The advertised improvements of frequently shipping code and changes and iterative development, they happen. They happen when you’re not mature, and they happen when you are mature. Keep that in mind.
Then we’ll talk more about some measurement in some later slides where we talk about ways that I’ve seen to effectively measure teams’ maturity. Then the last thing that I want to level set on with my point of view of DevOps is how I view SRE and platform engineering. Some have said that platform engineering and SRE is the new DevOps. It’s something new. It’s different. Has nothing to do with DevOps. I think a bit differently on this topic. I think that DevOps is a methodology, an umbrella methodology, and that what we’re seeing with site reliability engineering and platform engineering is actually those more specialized team frameworks that are allowing you to address some of the problems that people were seeing as they implemented DevOps.
Maybe an iteration, but I think it’s still all part of the DevOps movement. While DevOps is pushing to address the way that your technology organization operates, as a whole, SRE and platform engineering is helping to define how those traditional operations teams go from doing the thing to enabling the thing. Also wanted to call out that with the change to DevOps, it’s all teams that are changing, not just development teams. I mentioned the shift from doing to enabling, that happens on the operations side.
DevOps – What It Isn’t
I want to show some opinions about what I don’t think DevOps is. The first one is, an alternative for standardization. A pitfall that I commonly see whenever teams are doing DevOps, or whenever they’re moving along their DevOps journey, is they want to enable that autonomy I talked about as being one of the biggest measures. In doing that, they want teams to control their own destiny. In controlling that destiny, they promote, you can use whatever you want, however you want, to get the job done. I agree that that is important, but there is a way that you can empower teams to do things autonomously while still having guidelines or something in place.
You can do this through tooling, process, permissions, automation, some combination of all of these things, but the absence of them causes chaos. That’s one important thing that I don’t think DevOps is. The next thing is a replacement for ops. We have that. There’s a thing that replaces ops, and it’s called NoOps, but it’s not called DevOps. We’re not going to be talking about NoOps. Then there’s two more here. It’s not a single team or a job title. This is a bit of a controversial opinion. I mentioned earlier that DevOps is interpreted in many different ways. A bit of a running line among DevOps practitioners and leadership is that DevOps is a methodology, not a job title.
There are some companies that they’ll have DevOps engineers, and they’ll have DevOps teams for whatever reason, either it’s because that’s their interpretation, or because they needed to create a new job title for pay bands or whatever, whatever they had to do to attract new talent in the DevOps world. There’s nothing against and no shame in having those. My point here is more that it’s just important to remember that DevOps is a collective effort and not a set of teams or engineers. It’s many teams, many engineers across an organization.
Then last one, you can’t purchase DevOps, which means DevOps is not a tool. There are many tools that will help you on your journey to achieving DevOps, and those tools have largely been considered by the industry, DevOps tools. There’s not a single tool set that we’re going to talk about today, or probably ever, that is going to give you successful DevOps out of the bag, or in the can, or however you want to say it. You can use any number of those tools in different combinations. In order to have a successful practice, you have to think about the things that I mentioned earlier, like culture and responsibility lines, focus on learning, appetite for change, all of those less tangible than a tool things.
What’s a Healthy Team?
I also keep referring to a team. There are a few facets I consider for healthy teams. I just wanted to talk about those. This is particular to healthy DevOps teams, that’s going to fall outside of the more traditional metrics that we’ll talk about later. The first one is that I’m a firm believer in having a psychologically safe environment. The TLDR here is, no brilliant jerks. What I mean by that is we have this long-held myth in engineering that it’s ok to not have good communication skills, that, as I was an engineer, that seemed to proliferate the organization. That it’s ok that Beth or Bob don’t have great communication skills, because look at all the things they’re doing for the team. Look at all the engineering effort they’re putting forward.
The problem with that is, if you have a brilliant jerk on your team, generally speaking, you’re creating a non-psychologically safe environment for everyone else on that team, so they’re not going to feel safe sharing their ideas or opinions. You’re going to lose that potential for innovation that you would have typically got from them. They’re going to shut down, probably in meetings, and feel like they aren’t going to be heard. They’re not going to feel like they have a space to safely learn. It’s important to shut that down so your team can continue to tout the experience that they have as a collective group. I also consider a healthy team autonomous.
Healthy teams should be able to use the tools, patterns, and practices that are in place to do what they do with the guardrails set out for the organization. If they need to build infrastructure, they should be able to do that. If they need to manage that infrastructure, they should be able to do that in a way that doesn’t give them extra or additional overhead. If they need to deploy code, they should be able to do that without waiting a week for an approval or waiting for somebody to look at it. They should be able to use the practices and patterns in place to safely deliver that to end users or customers or whoever their stakeholders are, by themselves, without having to consult with multiple other teams. Then, engineers should be empowered to solve tough engineering problems without cognitive overload. Really, this can be interpreted in many different ways, but what I specifically mean is twofold.
First of all, you can’t have an everything team. We’ll talk a bit about what everything teams look like. There should never be a team that is a catchall or a dumping ground for things that don’t fit anywhere else. This is going to inherently lead to that cognitive overload, and teams being too overwhelmed to solve the tough problems that you would like for them to use their big brains to solve. Second, just because a team is on board to solve those tough problems, do not try to add the overhead of, if you’ve touched it, you’ve owned it. What I mean by that is DevOps and having healthy teams is about facilitating innersourcing. DevOps helps you use innersourcing to be able to source the best ideas from across the organization. If you’re fostering a culture of, you’ve touched it, you’ve owned it, nobody is going to want to help solve those tough problems or collaborate in those ways.
I think it’s also important for teams to have a clear and understood remit. Basically, what I mean by this is, if teams don’t have a clear understanding of where they fit into an organization, they’re not set up for success. They feel demotivated. Their morale suffers. Sometimes this can be simply by teams owning too much, so those everything teams I talked about. Or when a remit is incredibly wide or too wide, that cognitive overload of trying to do everything for everyone on everything just becomes too much. It’s also important for teams to understand not the things that they’ll be focused on or working on in their day to day, but also how that plays into the bigger picture for your organization.
How does that roll up to the strategic plan for your company? I have a story here. I have a mentor. I meet with her once a month. We used to meet much more often, but she’s in America, and I’m here. She shared with me the importance of this big picture, we call it, of this showing a team where they fit into the organizational plan, showing them the things that they’re working on matter. Before she talked about it in that way, I never really, truly thought about it. I was like, we all know that our work is helping the company. That’s what we’re here for. That’s why they sign our paycheck. It’s more than that. It’s, it shows a team where their impact is, especially for the backend engineering teams or the operations teams that they don’t really deliver new shiny features to production that a customer uses in a platform.
Showing them, this is how you’re actually helping us achieve that strategic direction, is incredibly important. Consistent work rituals is also a sign of a healthy team. I wanted to call this out simply because I think it’s important for teams to have a routine and understand the expectations laid out for them, and understand when things are going to be happening to them or with them. In my team right now, we do the standard agile practices. We have epic refinement sessions at the same time, biweekly. On the off weeks, we do backlog review. We do sprint planning and sprint retro at the same time every two weeks. We do our standups every morning. This consistency has really helped bring consistency to the team’s work, to their understanding of delivery, to their understanding of our objectives, and has helped with delivery.
It also becomes more important to keep this consistent when you start actually gathering metrics for these teams. We’ll talk about that more. Then, finally, on the topic of measurement, make sure you’re being transparent. Several talks have talked about the importance of transparency. As a leader, I advocate for being as transparent as possible in every scenario. The last talk talked about being transparent about promotion cycles and process. I think you should do that. You should also be transparent about the measures that the team is being held to, the measures that you yourself are being held to, and have them understand what is actually expected of them, and what things are you looking for to gauge that.
How Are Teams Really Working?
We’re not here for all of the background stuff. We’re actually here to talk about how teams are really working today. I mentioned that I have experience across different industries and different levels of DevOps maturity, which means that I have insights into a couple different ways that teams are working today. This isn’t just going to be a reflection of how my team today at The LEGO Group is working. It’s going to be a collective reflection of all of the types of teams I’ve worked in. I’ll call out some pros and cons of all of them. I also want to highlight that as we go through these, I’ll talk about the ways that we’re most successful either by helping the teams, by helping our delivery, or the best-case scenario, having the best of both worlds.
Team Structures
Team structures is the first thing I want to talk about. There’s three of these that I’m going to take you through. The first structure I want to talk about, I’m calling the traditional structure. Even when doing DevOps, some teams have stayed in the structure. More than likely, you have product squads full of developers that are rolling up to some VP somewhere, or some director somewhere. Then you have an IT team that’s completely separate, a different part of the organization. Whenever you look within, so within your IT department, for example, that’s where I have the most experience, because that’s where I’ve worked most of my career. You’ll have IT engineers. You’ll probably have a cloud team. You’ll have the network engineers, and all of the things in between.
The obvious benefit with the traditional structure was always that there were really clear lines of who owned what, but it also had really deep silos, because everyone was largely encouraged to stay in their lane. Even with that move to DevOps, some organizations chose to keep this model and roll with this structure. What they did was turned those specialized functions into more consultative roles. What that looks like is essentially, say you have a network engineer. Those network engineers would consult with the product squads on the things that they needed to build a network in the cloud, for example, and help enable those product squads, ultimately leaving the product squad to own that long term.
What this did, through this lending out of expertise, those product squads gain knowledge of how to autonomously support their application or their product while taking that, do the right thing first or do the right hard thing first approach, because they’re leveraging the expertise of that part of their organization. The downside of this model is it’s traditional, first of all, so you can see how silos could potentially form. As a leader or really as an organization, you have to be really clear on what you hope for these teams, because it’s super easy to fall back into the old traditional processes and practices where all these teams in the middle own all of the bits, and the development teams build the bits. Because you’re still in that structure, facilitating that can become quite hard.
The next structure I want to talk about, I’m calling single team DevOps. On the surface, it looks a lot like what I just showed you. I just changed the words. Then when you zoom out, it looks a lot different. All of those specialty areas, gone. You have a DevOps team full of DevOps engineers. From those DevOps engineers, everybody needs them. Everybody wants a piece of what they’re doing. Everyone needs their support. They’re usually equally responsible for anything from delivery questions to network questions. They’re probably considered owners of lots of things, but specifically they’ll be owners of infrastructure more than likely. That can be actual containers, that can be the processes, that can be the entirety of your cloud environment. It can be any of the above. As the owners of those things, they’re going to be on the hook for requests from auditors, for requests for compliance.
Again, requests coming from all sides. These are the everything engineers I was talking about earlier. They’re very tired. Remember how we talked about team health? This model doesn’t really promote any of those health markers. It doesn’t advocate for the avoidance of cognitive overload. Having that broad and that much weight on a single team’s shoulders makes every day feel like just staying above water, if you can. Innovation is not happening in this model. These teams are often going to see tremendous amounts of churn, in my experience. You’ll just see engineers cycle through. They’ll burn out usually, or they’ll move to other teams that have a more clear remit.
Ultimately, this team feels a bit like a dumping ground for everything on all the sides. They’re the keepers of that infrastructure. They’re the experts in DevOps for your organization. They’re the admins of the environment, which means that they’re the bottleneck for a lot of requests. It’s also worth mentioning that I’ve never seen this model work at scale ever. In all of the organizations that I’ve worked with, ones that start with this usually transition to something else. Keep in mind the organizations that I’ve worked with have been on the larger side. They’re probably not like massive Microsoft, Google, but they’re big.
This probably works slightly better whenever you have less demand on those DevOps engineers time with smaller teams and smaller spaces. I would still say that for the long term, this model doesn’t really work. There’s always going to be too much to know. There’s going to be too much to learn, too much to manage, and no clear way to really build or empower teams around you, so you just continue to perpetuate that bottleneck behavior.
Then the last one that I want to talk about is empowered DevOps. I’m calling it peak autonomy as well, and we’ll talk about why. In this model, you have your product squads. You’ll notice in the middle you now have platform engineers and site reliability engineers. With this model, so your IT department is no longer made up of specialty areas, but it’s also not made up of just a DevOps team doing all of the things for everyone anymore, either. The platform engineers, or the intent behind the platform engineers, is to focus on building platforms and patterns of practice that can be used by the product squads to enable them to do the things they need to do. That’s those guidelines and guardrails I was talking about.
Through this enablement, you achieve the autonomy of those product teams in a safe way. I also want to add the caveat that you can have several platform teams in an organization. I know platform is singular, but you can have multiple platforms. There’s likely never going to be a single platform that you provide for product squads, because just as you had many concentrations in the traditional model where you had that network expertise, you’re also going to need to provide platforms to accomplish some of those tasks, whether that’s modules in Terraform, for example, or something else. You’re also going to have site reliability engineers as part of this. This too can have many different approaches depending on who you are, what company you’re in, who you talk to.
For the sake of the day, we’re just going to say that these SREs are responsible for the reliability of the services broadly. These SREs are going to ensure a solid mean time to acknowledge incidents. They’re going to make sure that your mean time to resolve those incidents is generally getting better. Then promote practices across the product squads that bring more reliable services. Things like observability, things like defining error budgets, SLIs and SLOs, all of the things that you do as an SRE. I want to put it simply, these SREs are there to help promote operational maturity across teams. They can do that in any number of ways. They can be embedded or they can be more consultative.
Both models work, it just depends on how your teams are set up. Inversely, in this model, the product squads have a bit of a different role to play. Given the focus on autonomy, the product squads have the tools and patterns that they need that have been provided by the platform engineers to effectively build and own and then also maintain the applications from end-to-end. Both the traditional model and the empowered model promote that ownership of environments, living within the teams that built the application. This model is a bit more structured than the traditional method, though, and it promotes that centralized enablement coming from those platform teams.
It promotes those standards and that safe environment to learn, that we mentioned before. It also promotes having a collective and unified approach, so you’re not adding cognitive overhead to the product squads. We don’t want to just move the problem from one spot with everything team in the DevOps engineers and go put it on the developers. That’s not the intent. You have to have enablement material to make this work.
My sales pitch, why do empowered DevOps? This is the model that I’ve seen be most effective, pure and simple. It generally provides those key non-technical pillars that we talked about of healthy teams, like autonomy, the reduction of cognitive overload, while also allowing for that faster delivery, faster cycle times for teams. It puts the tech in the hands of the people that need it most, and that can impact change to it the quickest. You’re allowing for that quicker iteration. Then also giving that platform mindset. It allows product squads to focus on what matters to them and not have to deal again with that cognitive overload and overhead of owning everything. They have things in place to help them own the new things that are coming to them, and a safe space to learn how to do that.
T-Shaped Engineering
Then I wanted to talk about the dreaded T-shaped engineering. We probably all heard it. Maybe it was a couple years ago. Maybe it was yesterday, but we’ve all heard it come up. As an engineer myself, I know the skepticism that comes with the connotation around T-shaped engineering. As a leader, I think it’s important to make it clear that you’re not expecting everyone to learn to be an expert in everything. With the shift to this model, there’s going to be a bit of a learning curve, whether you’re learning the platforms provided by the platform engineering teams, or whether you’re learning how to go from building and doing to supporting. It’s all about learning a new engagement model and a new way of thinking.
This is also an illustration of something I talked about earlier, which is making that shift into an investment of learning. It’s changing your culture to put value on that learning and growing, while you ensure you maintain a balance with that cognitive load on teams. Basically, this doesn’t mean that you’re an expert in everything. It actually means that you learn to be aware of how to use the tools around you, while broadening your understanding. You’re broadening your understanding, not your expertise, in wider areas.
How Are Teams Collaborating?
The next thing I want to talk about is how teams are collaborating, and talk about what’s working for them. There’s a lot of tools to help with collaboration, I’m sure you’re all aware. Some companies have multiple chat tools, for example, but the collaboration tool is the new DevOps tool. There is no single set that’s going to give you the perfect collaboration for teams. With the new ways of working that come out of COVID and all that we’ve dealt with over the last four years now, these practices have morphed into something different than what we knew to be true back in 2019. I’ve worked in teams using a full Atlassian suite. I’ve worked in teams that are using everything in Azure.
In both cases, the biggest thing was, teams need a way to give real-time feedback in the written form. They need integrations to be able to automate their processes and tie them together. They need pipelines for delivery that encompasses all of the checks and things that are important that we’ve talked about already. Teams that have these methods have been the most successful in being able to deliver quickly, safely, and with the least customer impact. This is also how teams stay connected. Another buzzword that happens a lot is having a connected culture, or a one team approach, or whatever your company has branded as their hashtag for the day. It doesn’t have to rely solely on in-person experiences.
Many of us are working in global organizations. I’ve only ever worked in global organizations. That in-person team camaraderie is still important. There’s still value in that. More so, building relationships in real time with everyone is key. Tools aren’t necessarily going to give you that. What you should do and focus on is build something stable and consistent into your culture that’s going to give you those things, that’s going to give you that collaboration. Set communication standards with your team. Enable teams in other ways. Find ways to have fun with your whole team remotely, for example. That’s what’s going to build collaboration in your team.
When we talk about collaboration across multiple teams, there’s many avenues that you can take here. Fireside chats is a good one, just to empower experts within your organization to use a forum to share what they know and what they’ve learned. While I was at H&R Block, I founded something called Block Bits. Block Bits was, basically, we do two lightning talks, so we would have 30 minutes. It happened the second Wednesday of every month. You could sign up to do a lightning talk and just present, this is a cool thing that I learned, did, or my team built. It gave a place for engineers to get in front of the wider engineering community.
Usually, we had attendance between 300 and 500 engineers. It gave them that broader audience. Even though they didn’t realize at the time, it built connections. It also started other teams thinking, I can solve the problem in the same way that you did, and I wasn’t thinking about it that way. That’s a good way to collaborate across teams when you have a very big engineering organization. I’m also a big advocate for community user groups. I started my automation engineering career long ago in the configuration management space. Whenever I was trying to introduce configuration management and automation to an organization, I leaned really heavily on these community user groups. We had a Chef user group. We were using Chef at the time.
Basically, what this did was it gave users a way to feel like they had ownership in the thing we were doing. It gave the adoption of that platform a little bit of a kickstart. It was also a really good forum to understand what the actual challenges those users were facing, so then we could build solutions for that into the platform itself. You can also encourage gatherings like engineering clubs to form. We have so many clubs within The LEGO Group. Most recently, a backend engineering club was founded, and all of the backend engineers get together and share solutions and problems that they’re facing. It’s a really good way to build camaraderie across your engineering organization.
Then, workgroups. I look at workgroups as a way to do important things for the organization that maybe would get deprioritized somewhere else, but also as a way for people to stretch outside of what they’re doing every day, if they want to learn something or do something new. Because it’s a safe environment for them to be able to pick something up.
There are also ways that I’ve found to improve collaboration within an individual team. I mentioned earlier the importance of routine. The typical team rituals that take place that I talked about, giving them a forum for sharing and a way to include each other in their work. In the case of retros, I view them as a bit of team therapy and a bonding experience. It depends on the week. As long as you foster that culture of sharing, and as long as your engineers feel safe in doing so, these retros are a really good place for them to be real with each other, for them to express things that are going well, things that maybe didn’t go so well over the last two weeks. If there’s any kind of stress or underlying tension, usually we can work that out in a retro. From these sessions, we also do another thing, MOB sessions, where the team gets together to solve hard problems.
Again, it’s about building that camaraderie and trust within the team to promote finding that right solution, but it also helps promote the culture of learning that I talked about earlier. Within LEGO, we all have cute names, and then we have real names, and my team’s cute name is Houston. Not for, Houston, we have a problem, though, sometimes that is the case. We do Houston meets. Anyone on the team can join. It’s in our group channel. We’ll just start a meeting and we solve hard problems. We share what we’re working on. We use it as a sounding board or a rubber duck session, but it gets the team solving the problem together, and then allows them to learn from each other. I should also mention, these aren’t planned. They’re entirely informal. The team is empowered to say, does anyone want to jump on a Houston meet? They’ll just start a meet.
Measuring Success
The next section and the last section actually is measuring success. There’s a lot of metrics that I’m going to talk about, not necessarily measuring lines of code, but a lot of ways that you can measure the impact of the capabilities of a team. As a leader, I found that taking this more outcome and capability approach is a better indicator of my team, the teams that I’ve been in, of their health and how they’re impacting the organization as a whole than those typical productivity markers. There’s also something to be said about ensuring capacity trends stay on target and all of that. I’m not actually going to talk about those, because we all know that that’s something that you do as part of planning.
Each type of metric that I’m going to talk about is slightly different depending on the focus area. I couldn’t talk about metrics and about DevOps without talking about DORA metrics. We’ve all heard them. We probably all have OKRs about them. This has been the long-held standard and way to determine DevOps maturity. They’re going to help you understand how often your code is shipping. The time it takes a developer to get it from their laptop to a production environment, live. The quality of those changes, and the impact of errors on your customers.
Then how quickly you can recover after failure. For platform teams, these could be good markers for how well their enablement is working, because you should start to see these metrics get better if you’re enabling teams in the right way. As they’re all, in a way, measures of autonomy, these are also really good indicators of whether there’s adequate enablement to allow for those faster delivery times, or it will point to if you have any bottlenecks.
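As a rough illustration of how those four measures can be derived from delivery data, here is a hedged TypeScript sketch over a handful of invented deployment and incident records; the field names and sample values are assumptions for the example, not a standard schema.

```typescript
interface Deployment { id: string; committedAt: Date; deployedAt: Date; failed: boolean }
interface Incident { deploymentId: string; detectedAt: Date; resolvedAt: Date }

// Invented sample data covering a one-week window.
const deployments: Deployment[] = [
  { id: "d1", committedAt: new Date("2024-12-01T09:00Z"), deployedAt: new Date("2024-12-01T15:00Z"), failed: false },
  { id: "d2", committedAt: new Date("2024-12-02T10:00Z"), deployedAt: new Date("2024-12-03T11:00Z"), failed: true },
];
const incidents: Incident[] = [
  { deploymentId: "d2", detectedAt: new Date("2024-12-03T11:30Z"), resolvedAt: new Date("2024-12-03T13:30Z") },
];

const hours = (ms: number) => ms / 36e5;
const daysInPeriod = 7;

// Deployment frequency: deployments per day over the observed window.
const deploymentFrequency = deployments.length / daysInPeriod;

// Lead time for changes: average commit-to-production time, in hours.
const leadTimeHours =
  deployments.reduce((sum, d) => sum + hours(d.deployedAt.getTime() - d.committedAt.getTime()), 0) / deployments.length;

// Change failure rate: share of deployments that caused a failure in production.
const changeFailureRate = deployments.filter((d) => d.failed).length / deployments.length;

// Mean time to recovery: average detection-to-resolution time for incidents, in hours.
const mttrHours =
  incidents.reduce((sum, i) => sum + hours(i.resolvedAt.getTime() - i.detectedAt.getTime()), 0) / incidents.length;

console.log({ deploymentFrequency, leadTimeHours, changeFailureRate, mttrHours });
```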
The next set is an extension of DORA, and that is the SPACE metrics, or the SPACE framework. Basically, this framework goes a step further and measures more of those soft areas of maturity, where the DORA metrics are more focused on the delivery piece. With satisfaction, this measure can really show anything from developer platform engineering job satisfaction to something like satisfaction with the platform, if we’re talking about what’s useful for a platform engineering team, for example. How useful is that thing that you built? Performance and activity, they all tie back with traditional measures like velocity and delivery.
While they may not, on their own, be the best indicator that a team is delivering value, combined into something like the SPACE framework you can start to see a clear picture of the impact that that team is having, and whether or not those focus areas are working for the larger group. Community is also a good way to measure the impact of that internal collaboration that I talked about, and to make sure that you’re focused on the right things, building environments that are inclusive for everyone. Then, measuring if people are talking and working together to solve hard problems. Are those solutions being spread across the organization? Are they just landing and sitting in a single team? Those are all important things to know. Then, finally, evolution, again with autonomy. This is a great measure of autonomy because you want to ensure that the change is happening over that period of time or moving in the right direction.
On platform metrics, after all of what I just talked about, I have one, and it is adoption. If you build a platform that nobody adopts, did you really build a platform? It’s like if a tree falls in a forest and nobody hears it, did it really make a sound? You have to be focused on building something that’s useful for the teams that you’ve built it for. You have to do user research. You have to understand what their challenges are. Adoption is the single most important metric for a platform engineering team to understand if their platform is useful.
Then with SRE teams, we all probably, if you’ve worked in the SRE space, have heard of the Golden Signals. Those are things an SRE team should be promoting to product squads. It’s also a way to measure the impact of that enablement that they’re providing, whether they’re embedded SREs or whether they are more consultative. I talked earlier about there being many ways to do SRE, of those two models, I see these measures as helpful targets for enablement on that consultative model that I talked about. Even in the embedded approach, having this data, understanding latency, traffic, errors, saturation, and seeing how those trends change over time when you’ve sent these SREs out to help enhance the operational maturity of your teams, seeing those over time is really important.
It’s also important to understand your incident response times. This ties really back to those DORA metrics and that MTTR that was at the end there. In my opinion, this is one of the best measures of operational maturity, because it tells you how quickly you can solve something, and, by extension, how many handoffs it takes to actually solve the problem. Because if that number is 5, for example, it would be better at 1. It would be better if the first team that touched it could solve it. Uptime is also really important. You want to ensure that as your teams are helping product squads focus on increasing their reliability, or enhancing their reliability, that they’re focused on improving the uptime of their platform.
Then the final one is practice maturity. Doing maturity models of your product squads is really important for an SRE team when you get started, to understand where they're starting from. What practices do they have that are operationally mature already, and what practices need your help? Then you can see that heat map and focus in on the areas that are going to provide value to that product. Measuring that over time is a really important practice, just to ensure that you're taking them on the right journey.
Then there’s team health. Within The LEGO Group, I was introduced to team health checks. We do these quarterly. You can do them more often if you want. Generally speaking, quarterly gives you enough time to be able to make improvements between health checks. Essentially what these are, we use our agile coaches, and they put together a two-hour session where we get together and talk about problems within the team, problems with the ways of working that we’re following, any other challenge that we’re facing really. We talk about them. We come up with some actions to solve them.
Then we try to change them in between. It gives teams an area that’s safe to express, this is how I think this team could be better. It drives forward making the team better, by somebody other than just yourself. Then the last one is a 360 review process, or anonymous feedback for teams. The last talk talked about the importance of doing real-time feedback and not relying on the end of year process to get that feedback. I also think that goes both ways. While I’ll give continuous feedback to the team, so they understand where we’re at, so I’ve level set with them, I also want them to give feedback to me. What things am I doing that you don’t find helpful? What things could I be doing that you would find helpful? Doing that consistently throughout the year is really important to having operationally mature teams as well. Just using that as a system of measure for me has been really effective.
Deno 2 Released, Focuses on Interoperability with Legacy JavaScript Infrastructure and Use at Scale
MMS • Bruno Couriol
Article originally posted on InfoQ. Visit InfoQ
The Deno team recently released Deno 2. According to the team, Deno 2 provides seamless interoperability with legacy JavaScript infrastructure, a stabilized standard library, a modern registry for sharing JavaScript libraries across runtimes, and more.
Deno 2 touts backward compatibility with Node and npm. The release note explains:
Deno 2 understands package.json, the node_modules folder, and even npm workspaces, allowing you to run Deno in any Node project using ESM. And if there are minor syntax adjustments needed, you can fix them with deno lint --fix.
The aforementioned compatibility enables teams to incrementally adopt Deno and its all-in-one toolchain. Deno developers can import npm packages via the npm: specifier:
import chalk from "npm:chalk@5.3.0";
console.log(chalk.blue("Hello, world!"));
Developers can also leverage import maps to set a bare specifier for their npm package:
{
  "imports": {
    "chalk": "npm:chalk@5.3.0"
  }
}
The module can then be used with its bare specifier:
import chalk from "chalk";
console.log(chalk.blue("Hello, world!"));
Deno 2 also claims to support a large list of commonly used web frameworks (e.g., Next.js, Astro, Remix, Angular, SvelteKit, and QwikCity).
Deno 2 additionally supports dependency management with deno install, deno add, and deno remove. The latter two commands respectively add and remove packages from a package.json file.
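As a small, hypothetical sketch of how those commands fit with the earlier examples (assuming deno add npm:chalk@5.3.0 has already been run in the project), the recorded dependency can then be imported by its bare specifier:
// Hypothetical sketch: after `deno add npm:chalk@5.3.0`, the dependency is
// recorded in the project configuration and can be imported by its bare specifier.
import chalk from "chalk";

console.log(chalk.green("dependency added via deno add"));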
The Deno Standard Library is now stable and included in Deno 2. It consists of dozens of audited utility modules covering data manipulation, web-related logic, JavaScript-specific functionalities, and more. Developers can review the complete list of modules from the standard library on Deno’s JavaScript Registry (JSR), an open source JavaScript registry that embraces ESM (JavaScript native modules), and natively accepts TypeScript packages.
The release note explains the benefits of JSR:
It supports TypeScript natively (you can publish modules as TypeScript source code), handles the module-loading intricacies of multiple runtimes and environments, only allows ESM, auto-generates documentation from JSDoc-style comments, and can be used with npm- and npx-like systems (yes, JSR turns TypeScript into .js and .d.ts files, as well).
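For illustration, a minimal sketch of consuming a standard-library module directly from JSR via the jsr: specifier; the specific module and version pin here are assumptions made for the example:
// Hypothetical example: import a path helper from the @std/path package on JSR.
import { join } from "jsr:@std/path@1";

console.log(join("docs", "release-notes", "deno-2.md"));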
Deno also supports workspaces (also known as "monorepos") to manage multiple related and interdependent packages simultaneously. Deno workspaces support using a Deno-first package from an existing npm package, easing the migration from npm workspaces.
Developers can install the production release from deno.com. Developers are encouraged to review the original release note, which includes the full list of features, improvements, and bug fixes. Deno is open-source software available under the MIT license. Contributions are encouraged via the Deno Project and should follow the Deno contribution guidelines.
Flutter 3.27 Promotes New Rendering Engine Impeller, Improves iOS and Material Widgets, and More
MMS • Sergio De Simone
Article originally posted on InfoQ. Visit InfoQ
The latest version of Google’s cross-platform UI kit, Flutter 3.27, brings a wealth of changes, including better adherence to Apple’s UI Guidelines thanks to a number of improved Cupertino widgets, new features for CarouselView
, list rows and columns, ModalRoutes
transitions, and so on. Furthermore, the new release makes the Impeller rendering engine the default, with improved performance, instrumentation support, concurrency support, and more.
Cupertino is a collection of widgets that align strictly with Apple's Human Interface Guidelines. Flutter 3.27 updates a few of them to increase their fidelity, including CupertinoCheckbox, CupertinoRadio, and CupertinoSlidingSegmentedControl. It also extends CupertinoCheckbox and CupertinoSwitch to make them more configurable, and brings CupertinoButton on a par with the latest customizability options introduced in iOS 15. Other improvements affect CupertinoActionSheet, CupertinoContextMenu, CupertinoDatePicker, and CupertinoMagnifier.
On the front of Android-native Material UI, Flutter 3.27 extends CarouselView with CarouselView.weighted, which allows you to define more dynamic layouts using the flexWeights parameter to specify the relative item weight occupied within the carousel view. Additionally, SegmentedButton can be aligned vertically, and a few widgets have been fixed to align better with Material 3 specifications.
Flutter 3.27 also improves ModalRoutes, text selection, and row and column spacing. ModalRoutes, which have the peculiarity of blocking interaction with previous routes by occupying the entire navigator area, now enable the exit transition from a route to sync up with the enter transition of the new route, so they play nicely together. Text selection now supports the Shift + Click gesture to move the extent of the selection to the clicked position on Linux, macOS, and Windows. Rows and Columns may use a spacing parameter, which makes it easier to offset them from each other.
After over one year in preview, the new Impeller rendering engine has been promoted to default on modern Android devices, replacing the old one, Skia. Skia is still available as an opt-in in case of compatibility issues. Impeller attempts to do at compile time a number of tasks that Skia does at runtime, such as building shaders and reflections and creating pipeline state objects upfront, and it improves caching to make performance more predictable. It also improves debug support by labeling textures and buffers and by being able to capture animations to disk without affecting rendering performance. When necessary, Impeller can distribute single-frame workloads across multiple threads to improve performance.
Going forward we will continue to make improvements to Impeller’s performance and fidelity on Android. Additionally, we intend to make Impeller’s OpenGL backend production ready to remove the Skia fallback.
Other improvements worth mentioning include improved rendering performance on iOS, support for the Swift Package Manager, and edge-to-edge and freeform support for Android. Do not miss the official announcement to get the full details.
Podcast: Key Trends from 2024: Cell-based Architecture, DORA & SPACE, LLM & SLM, Cloud Databases and Portals
MMS • Daniel Bryant, Thomas Betts, Shane Hastie, Srini Penchikala, Renato Losio
Article originally posted on InfoQ. Visit InfoQ
Transcript
Daniel Bryant: Hello and welcome to the InfoQ podcast. My name is Daniel Bryant. I'm the news manager here at InfoQ, and in my day job I currently work in platform engineering at Syntasso. Today we have a treat for you, as I've managed to assemble several of the InfoQ podcast hosts to review the year in software technology, techniques and people practices. We'll introduce ourselves in just a moment and then we'll dive straight into our review of architecture, culture and methods, AI and data engineering, and the cloud and DevOps. There are of course no surprises that AI features heavily in this discussion, but we have tried to approach this from all angles and we've provided plenty of other non-AI insights too. Great to see you all again. Let's start with a quick round of intros, shall we? Thomas, do you want to go first?
Thomas Betts: Yes, sure. I don’t think I’ve done my full introduction in a while on the podcast. My day job, I’m an application architect at Blackbaud, the number one software provider for social impact. I do a lot of stuff for InfoQ, and this year a lot of stuff for QCon. So I’m lead editor for architecture design, co-host of the podcast obviously, I was a co-chair of QCon San Francisco this year and a track host for QCon London, so that kind of rounds out what I’ve been up to. Next up, Shane.
Shane Hastie: Thanks, Thomas. Really great to be here with my colleagues again. Yes, Shane Hastie, lead editor for culture and methods. My day job, they call me the global delivery lead for Skills Development Group. Written a couple of books deep into the people and culture elements of things, the highlights for this year. Unfortunately, I didn’t get to any of the QCons, I’m sad about that, and hopefully next year we’ll be there in person. But some of the really amazing guests we’ve had on the Engineering Culture podcast this year have been, we’ll talk a bit about it later on, but that’s probably been some of my highlights. Srini.
Srini Penchikala: Thanks, Shane. Hello everyone. I am Srini Penchikala, my day job is, I work as an application architect, but for InfoQ and QCon, I serve as the lead editor for data engineering and AI and ML community at InfoQ. I also co-host a podcast in the same space, and I am serving as the programming committee member for QCon London 2025 conference, which I’m really looking forward to.
Real quick, 2024 has been a marquee year for AI technologies and we are starting to see the next phase of AI adoption. We'll talk more about that in the podcast, and also I hosted an AI/ML Trends podcast report back in September, so that will be the biggest reference that I'll be going back to a few times in this podcast. Definitely there is a lot to look forward to. I am looking forward to sharing in today's podcast the technologies and trends that we should be hyped about and also the hype that we should be staying away from. Next, Renato.
Renato Losio: Hi everyone. My name is Renato Losio. I’m an Italian cloud architect living in Germany. For InfoQ, I’m actually working in the Cloud queue, I’m an editor. And definitely my highlight of the year has to be being the chair of the first edition of the InfoQ Dev Summit in Munich. Back to you, Daniel.
Daniel Bryant: Fantastic. Yes, I've already enjoyed the Dev Summit. I was lucky enough to go to the InfoQ Dev Summit in Boston. I've worked in Boston for a number of years, or worked with a Boston-based company, and the content there was fantastic; staff-plus in particular sticks in my mind. I know we're going to dive into that, but I also did the platform engineering track at QCon London this year, which was fantastic. Great crossing paths with yourself, Srini. I think I've met almost everyone this year. Maybe not yourself, Shane, this year. Actually, I can't remember exactly when, but it's always good to meet in person, and the QCons and the Dev Summits are perfect for that kind of stuff, and I always learn a ton of stuff, which we'll give the listeners a hint at tonight, right?
Did our software delivery trends and technology predictions from 2023 come true? [04:10]
So today as we’re recording, so I just want to do a quick look back on last year. Every year we record these podcasts, we always say we want to look back and say, “Hey, did our predictions come true?” And when we pulled out this time for last year, we said, “2024, could the use of AI within software delivery becoming more seamless and increasing divide between organizations and people adopting AI and those that don’t, and a shift towards composability and improved abstractions in the continuous delivery space”. So I think we actually did pretty well this year, right? I’m definitely seeing a whole lot of AI, as you hinted at, Srini, coming in there and as you say, every year, Thomas, the AI overlords have not stolen our jobs yet. So not completely in terms of software engineering.
Thomas Betts: I think we’re all still employed, and I’m surprised that quote in there about the separation, I think that’s true. We’re seeing the companies that are doing the innovator early adopter are still doing new things. I think the companies that are more late majority are like, “We want to use it”, but they’re not quite sure how yet. I don’t know if, Srini, you have any more insight into how people are adopting AI?
Srini Penchikala: Yes, that’s very true, Thomas. Yep, definitely AI is here, I guess, but again, it’s still evolving, right? So I definitely see some companies are going full-fledged, some companies are still waiting for it to happen. So as they say that the new trend, and I will talk more about this later in the podcast, the agentic AI. The AI agents that cannot only generate and predict the content and insights, they can also take actions. So the agentic AI is a big deal. So as they say, the AI agents are coming, so whether they’ll overtake our jobs or not, that’s to be seen. But speaking of the last year’s predictions, we had talked about the shift towards AI adoption, right? Adoption has been a lot more this year, but I think we still have some areas where I thought we would be further ahead and we are not. So it’s still evolving.
Shane Hastie: Yes. I see lots and lots of large organizations that are not software product companies putting their toes in and bumping against privacy, security, ethics and not sure how to go forward knowing that they absolutely need to, and often that governance frame slowing things down as they’re exploring, “Well, okay, what does this really mean for us?” And a lot of conservatism in that space.
Daniel Bryant: It’s really funny, Shane, compared to what Renato and I are seeing. So I went to the Google Cloud Summit in London and I only heard AI, AI, AI, AI. If you listen to the hype, and I think, Renato, you covered re:Invent for us recently. I think if you sort listen to the hype, you believe everyone, even the more late adopters are covering AI, Renato, right?
Renato Losio: Yes, I mean, just to give an order of magnitude, I don't know if they changed the number during the conference, but at a conference like re:Invent, there were over 800 sessions about AI/ML. By comparison, there were just about 200 about architecture and even fewer about serverless. So that gives a bit of a sense of where the conference was going.
Surprisingly, the [keynote] itself was not so generative-AI focused. They tried to make it different, and we'll probably come back to that later on, but I find it interesting what happened in the last year in the space of AI and cloud. But I don't take responsibility for the prediction of last year because I was not there. I have to admit that I love to start by looking back at the predictions. Actually, when I see tech predictions for 2025, I tend to go to the bottom of the article and use it as a reference to the article of the year before, because I like to see a year later what people predicted and whether it still holds. I love to go back to those topics.
What are the key trends in architecture and design? [08:08]
Daniel Bryant: That’s awesome, Renato, and you will get the privilege this time at the end to talk about your predictions, right? So we’ll hold you to it next year. So enjoy it for the moment, but I think that’s a perfect segue, you mentioned serverless there. Architecture and design is one of our sort of marquee topics. To be fair all the things we’re going to talk about today are marquee topics, but we often look to the InfoQ through an architect lens. And Thomas, you sort of run the show for us there. I know you’ve got a bunch of interesting topics that you wanted to talk about. I think the first one was around cell-based architectures, things like that.
Thomas Betts: Yes, so this was something we highlighted in the A&D trends report back in April, I think it was, and we ended up having an e-mag come out of it, and had a lot of different topics. Some of those were from various articles or presentations at QCons. And this just happens in architecture: we have these ideas that have been around for a while, but we didn't have the technology to make them easy to implement. And then it becomes easier and then people start adopting it. So the idea is putting all of the necessary components in one cell, so you minimize the blast radius.
And if one goes down, the next cell isn't affected, and how do you control it and how do you maintain it? Just like any microservices or distributed architecture, there's extra complexity involved, but it's becoming easier to manage, and if you need the benefits, then it's worth the trade-offs. You're willing to take on the extra overhead of managing these things. So like I said, that e-mag is full of a lot of different insights and different viewpoints. Again, architects are always looking at different viewpoints and ways to look at a problem, so I like that it gives a lot of different perspectives.
Daniel Bryant: We’ve got to named check Rafal [Gancarz] there on that one just quickly, and Rafal did a fantastic job on that.
Thomas Betts: Yes, thanks for calling out Rafal. He did a fantastic job doing that. The other thing that I remember talking about with a few people at QCon London this year, and I think QCon London last year as well, was the green software trend. So a fantastic book just came out in April. I think the book signing was the release party at QCon London.
Daniel Bryant: Oh, yes, it was Anne Currie and team, Yes, fantastic.
Thomas Betts: Anne Currie, Sara and Sarah. Sara Bergman and Sarah Hsu were all there together, and they actually credited InfoQ with being the force that made the book happen, because that was how they were able to all get together and collaborate. So that book about Building Green Software: Adrian Cockcroft has talked about this. He's kind of championing it from the here's-how-you-do-it-in-the-cloud angle; going back to serverless, he advocates make things serverless, make them small, only running when you need it. That kind of philosophy, I think we're going to start seeing architects have to think about more and more. It's going to become more important.
The point that I love, and Sara had a great presentation on this, is that it just makes more efficient software. It makes better software; it makes more sustainable, more maintainable software, all the other abilities we look for. If you build with a green mindset, you get all those other benefits. So if you just say, "Oh, we need to make it green, we need to reduce our carbon footprint", and nobody really cares about that, well, all the other things you do care about, they come along for the ride. So start thinking about it that way. How do you only run your software when you need to? How do you only write the code you need? So there's a lot of ideas in there and I think we're going to start seeing more of that hopefully in the next year. That's definitely one of my 2025 predictions.
Renato Losio: I think you really raised a good point about green software, in the sense that it's also seen as a proxy for other things, like cost, for example. I think if you go to your CFO and you say, "We are trying to make our cloud deployment greener", probably he won't care that much, even if outside the company you might sell the message that you're generating less CO2. In reality, it's often really a proxy for cost optimization. When you talk about serverless, run just what you need, or choose regions that are more carbon-effective, those are often also the cheapest data centers because they're more efficient. So it's an interesting approach, looking through the lens of green and not just cost or architecture.
Thomas Betts: Yes, I know Anne has talked a lot about how one of the only measures we have sometimes is what’s our bill? What’s the AWS bill that we get or Azure bill? And if that’s the only measure you have, then use it, that’s better than nothing and it is actually still a pretty good proxy. The cloud providers are getting better at saying, “Here’s how that energy is produced”. But you have to dig for it. I think we’re going to start seeing that become more of a first class, “Here’s your bill and then here’s the carbon footprint of this because you ran it in this data center, which is in US East and it’s running on coal versus somewhere in Europe that’s on nuclear”. Right? So I think that’s going to be interesting to see if we can get to those metrics and people say, “Oh, we can run this somewhere else because there’s a better carbon efficiency and we save a lot of money by doing it”.
Srini Penchikala: All of that is important, I agree with you both the green software and the sustainable computing is a very good time to talk about it in the AI context because as we all know, the large language models that are the main part of GenAI, they do require a lot of processing power. So we have to be more conscious about how much are we really spending and how much value we are getting, right? So between the value of these solutions and what are we really spending in terms of money and energy resources and the environment, right? So how about you Shane? What do you think about green?
Shane Hastie: Green software, ethics and sustainability have been a drum I have wanted to beat, and have beaten, for the last three years, and it's been great to see more and more the ability to have those hard conversations. And the challenge within organizations, where as software engineers, as the development community, we can actually start to ask, "Hey, we want to do it this way". And now, as Thomas so smoothly says, we can actually use money as a good excuse, and that helps, because without showing the measurable benefits, we're going to struggle.
Thomas Betts: And Srini, you brought up the AI component and yes, the AI carbon footprint is enormous. Somebody will say it’s the next Bitcoin, it’s just spending a lot of money, but hopefully it’s producing value. The other aspect I thought was interesting, and this was a presentation at QCon San Francisco, was how GitHub Copilot serves 400 million requests per day, and it got into the details of how you actually have to implement AI solutions. So GitHub Copilot, two things. There’s GitHub Copilot Chat, that’s where you open a window. You ask a prompt, it responds, right? It’s like ChatGPT with code, but GitHub Copilot the autocomplete, it’s kind of remarkable because it has to listen for you to stop typing and then suggest the next thing.
And so all of the complications that go underneath that that I hadn't considered: they had to create this whole proxy, and it's hanging up HTTP/2 requests, and if you just use native things like NGINX, after a hundred disconnects it just drops the connection entirely and so you've lost it. There's all these low-level details, and I think when we get to see AI become a more standard part of big software, more companies that have these distributed systems are going to run into some of these problems. Maybe not at GitHub Copilot scale, but there's probably these unintended things that are going to show up in our architecture that we haven't even thought of yet. I think that's going to be really interesting to see how AI creates the next level of architectural challenges.
Srini Penchikala: Also, maybe just to add to that, Thomas, maybe AI can also solve some of those challenges, right? We talk about AI hallucinations and bias, but can AI also help minimize the environmental impact and the ethical biases?
Daniel Bryant: Said like a true proponent, Srini; the cause of, and solution to, most of the world's problems: AI, right? I love it.
Thomas Betts: Yes, I think we’re going to see how are we going to use AI as an architect? Can I use it in my day job? Can it help me design systems? Can it help me solve problems? I use it as the rubber duck. If I don’t have someone else that I can get on a call and chat with and discuss a problem, I’ll open up ChatGPT and just start a conversation and say, “Hey, I’m trying to come up with this. What are some of the trade-offs I should consider?” I haven’t gone so far as to say, “Solve this problem”. The hallucination may be completely invalid or it may be that out of the box thinking that you just hadn’t thought of yet, it’s going to sound valid either way. You still have to prove it out and make it work.
What are the key trends in culture and methods? [16:22]
So I think the other part of being an architect, I talked about using AI to do your job, but I think the architectural process has been a big discussion this year. All of the staff plus talks at QCons are always interesting. I think we have good technical feedback, but people love, I personally love the how do I do my job? How do I get better at my job, how do I level up? So we’ve seen some of that in decentralizing decision making. I talked to Dan Fike and Shawna Martell about that. They gave a presentation, they wrote up an article based on that presentation. And, Shane, I can’t remember if you talked to them as well or you talked to somebody else about how to do a better job. How do you level up your staff plus, how do you become a staff plus or principal engineer?
Shane Hastie: Yes, I’ve had a number of people on the podcast this year talking about staff plus and growth both in single track and dual track career paths. The charity majors still going, the pendulum back and forward. When do you make your choices? How do you make your choices? Shameless plug, I had Daniel as a guest talking about people process for great developer experience, technical excellence, weaving that into the way that we work. So all of these things leveraging AI, challenging the importance of the human aspect, that critical thinking, and Thomas, you made the point, the hallucination is there.
Well, one of the most important human skills is to recognize the hallucination and not go down that path, to utilize the generative AI and other tools at your fingertips most effectively. Platform engineering: there's still the myth of the 10x engineer, but with great platform engineering, what we do is we make others 10 times better. And there's a lot of, I want to say, the same old people stuff still coming up, because fundamentally the people haven't changed.
Daniel Bryant: That’s a good point.
Shane Hastie: Human beings, we don't evolve as rapidly as software tools. Digging into product mastery, a really interesting conversation with Gojko Adzic about observability at the customer level and bringing those customer metrics right to the fore. So yes, all of the DORA metrics and all of these others are still really important. I had a couple of conversations this year where we've spoken about how DORA is great, but it is becoming a target in and of itself for many organizations, and that doesn't help if you don't think about all of the people factors that go with it.
Thomas Betts: There’s a law that once you have a named goal, it stops being a useful metric, I’m botching the quote entirely. But it’s like once you have that, it was useful to measure these things to know how well you’re doing, but then people see it as that’s the thing I have to do, as opposed to, “No, just get better and we’ll observe you getting better”.
Daniel Bryant: Goodhart’s law, Thomas.
Thomas Betts: Yep, thank you.
Shane Hastie: And W. Edwards Deming, if you give a manager a numerical target, they will meet it even if they have to destroy the organization to do so.
Thomas Betts: You mentioned the DORA metrics. The reason I loved the QCon keynote Lizzie Matusov gave was that it talked about the different metrics you can measure for psychological safety and how to measure team success. And that's how you can say, are these teams being productive? And there are different survey tools out there and there are different ways to collect this data, but I think she focused on the idea that if people feel they're able to speak up and raise issues and have open conversations, that more than anything else makes the team more successful, because then they're able to deal with these challenges.
And I think that keynote went over really well with the QCon audience, that people understood like, “Oh, I can relate to that, I can make my teams better”. You might not be able to use all the new AI stuff, but you can go back and say, “Here’s how I can try and get my teams to start talking to each other better”. Like you said, Shane, the humans haven’t evolved that fast, software has.
Daniel Bryant: That’s a great quote. On that notion, are you seeing more use of other frameworks as well? Because I’m with you. In my platform engineering day job, I see DORA, every C-level person I speak to knows what DORA is, and for better or worse, people are optimizing for DORA, but I’m also seeing SPACE from [Dr] Nicole Forsgren, et al. I’m seeing DevEx from the DX folks, a few other things. And I mean space can be a bit complicated because there’s like five things, the S, the P, the A, the C and the E. but I think if you pick the right things out of it, you can focus more on the people to your point and the productivity, well, and the happiness as well, right?
Shane Hastie: Yes, we could almost go back to the Spotify metrics. The Spotify team culture metrics have been around for a long time and what I’m seeing is reinventions of those over and over again. And it’s fundamentally about how do we create that psychologically safe environment where we can have challenging conversations with a foundation of trust and respect. And that applies in technical teams of course, but it applies across teams and across organizations and the happiness metrics and there are multiple of those out there as well.
Ebecony gave us some good conversations about creating a joyous environment and also protecting mental health. Burnout is endemic in our industry at the moment, and I think it's not just in software engineering, it's across organizations, but mental health and burnout is something we're still not doing a good job at. And we really need to be upping our organizational game in that space.
Renato Losio: I think it’s been actually a very bad year in this sense as well with what you mentioned, Shane, about a manager that reached the goal, might destroy a team, make me think that this year one of the key aspects has been all the return to office mandate, regardless if it’s good or not for the company, has been a key element of a large company, became like the new trend.
Shane Hastie: There’s cartoons of people coming into the office and then sitting on Zoom calls because it’s real. The return to office mandates, and this is a strong personal opinion, they’re almost all have nothing to do with value for the organization, and they’re all about really bad managers being scared. And I’m sure I’ve just made a lot of managers very unhappy.
Daniel Bryant: Hey, people join this podcast for opinions, Shane, keep them coming.
Thomas Betts: So many times we come across Conway's law and all the different variations of it. I think Ruth Malan is the one who said that if the organization's structure and the architecture are in conflict, the organization is going to win. And I'm wondering, with the return to office mandates, how is that going to affect the architecture? I made the comment, I think it was last year, about the COVID corollary to Conway's law: the teams that couldn't work effectively once we all went remote weren't able to produce good distributed systems because they couldn't communicate.
Well, now we’re in that world. Everyone has adapted to that. I think we’re seeing more companies are doing distributed systems and the teams don’t sit next to each other. They have to form their own little bubbles in their virtual groups because they’re not sitting at the desk next to each other. If we make people go back to the office, but we don’t get the benefits of them having that shared office space, then what is that going to do to the software? I don’t know if I have an answer to that, but it seems like it’s not going to have a good impact if you’re doing it for the wrong reasons.
Srini Penchikala: Maybe I can bring a cheesy analogy to this, right? We started this conversation with serverless architecture, where you don't need to run the software systems and servers all the time. They can be idle when you don't need them. I think that should apply to us as humans also, right? I read this quote on LinkedIn this morning, I really liked it, it says, "Almost everything will work again if you unplug it for a few minutes, including you". So we as humans, we need to unplug once in a while to avoid burnout. I mean, that's the only way we can be productive when we need to work, if we take that break or time off.
Daniel Bryant: Yes. Plus one to that, Srini. Changing subjects a little bit. But Shane, I know you wanted to touch on some of the darker human sides of tech too.
Shane Hastie: Yes, I will go there. Cybercrime, the use of deepfakes, generative AI. I had Eric O'Neill on, and the episode is titled Spies, Lies in Cybercrime, and he brings the perspective of an FBI agent. There are some really interesting applications of technology in that space and they're very, very scary. Personally, I was caught by a scammer this year, and it was the social engineering that worked.
Fortunately, my bank’s fraud detection system is fantastic and they caught it and I didn’t lose any money, but it was a very, very scary period while we were trying to figure that out. And for me, and in Eric’s conversations, it’s almost always the social engineering piece that breaks the barrier. Once you’ve broken the barrier, then the technology comes into play. But he tells a story of a deepfake video in real time that was the chief financial officer of a large global company. So very perturbing, somewhat scary. And from a technical technologist perspective, how do we make our systems more robust and more aware? So again, leveraging the AI tools is one of the ways. So the potential for these tools is still huge.
What are the key trends in AI and data engineering? [27:01]
Daniel Bryant: Perfect segue, Srini, into your space. It's slightly scary, but a well-founded grounding there, Shane. But yes, I think it's almost a dual-use technology, for better or worse, right? And, Srini, I'd love to hear your take on what's going on in your world in the AI space.
Srini Penchikala: Yes, thanks, Daniel. Thanks, Shane, that's a really good segue. This is probably a repeat of one of the old quotes, Uncle Ben from the Spider-Man movie, right? "With power comes responsibility". I know we keep hearing that, but with powerful AI technologies must come responsible AI technologies. So in the AI space, again, there are a lot of things happening. I encourage our listeners to definitely watch the 2024 trends report we published back in September. We go into a lot of these trends in detail, but just to highlight a couple of things in the short time we have in today's podcast: obviously the language models are evolving at a faster pace than ever. Large language models, it seems like there's no end to them. Every day you see a new LLM popping up, and on the Hugging Face website, you go there, you see a lot of LLMs available for different use cases.
So it’s almost like you need an LLM, large language model to get a handle on these LLMs. But one thing I am definitely curious about and also seeing the trend this year are what are called small language models. So these are definitely a lot smaller in terms of size and the data sets compared to large language models. But they are excellent for a lot of the use cases where again, talking about green software, you don’t want to expend a lot of computing resources, but you can still get similar accuracy and benefits also. So these SLMs are getting a lot of attention. Microsoft has something called Phi-3, Google Gemma, there is the GPT Mini I think. So there are a lot of these small language models are definitely adding that extra dimension to the generative AI. And also these language models are enabling the AI modeling and execution on the mobile devices like phones, tablets, and IoT devices.
Now you can run these language models on a smaller phone or a tablet or laptop without having to send all the data to the cloud, which could have some data privacy issues and also obviously the cloud computing resource and cost. So this is one of the trends I’m watching, definitely keep an eye on this. And the other trend is obviously the one I mentioned earlier called agentic AI, agent-based AI technologies, right? So this is where I think we are going to the next level of AI adoption where we have the traditional AI that’s been used for a long time for predicting the results. And then we have GenAI, which started a few years ago with the GPT and the ChatGPT announcements. So generative AI not only does the prediction, but it also generates the content and the insights, right? It goes to the next level. So I think the agents are going to go one more step further and they can not only generate the content or insights, but also they can act on those insights with or without supervision of humans.
So there are a lot of tasks that we can think of that we don’t need humans to be in the middle of the process. We can have the agents act on those. And also one of the interesting use cases I heard recently is a multi-agent workflow application where each of the agents take the output from the previous agent as the input and they perform their own task. But doing so, they’re actually also giving the feedback to the previous agent on the hallucinations and the quality of the output so the previous agent can pick a different model and rerun the task, right?
So they go through these iterations to improve the quality of the output and minimize the hallucinations and biases. So these multi-agent workflows are definitely going to be a big thing next year. So that’s one other area that I’m seeing. Also, recently, just a couple of days ago, Google actually announced Google Gemini 2.0. The interesting thing about this article is Google’s CEO, Sundar Pichai, he actually wrote a note, so he was kind of the co-author of this article.
So it was a big deal from that standpoint. And they talk about how agentic AI is going to be impactful in the overall AI adoption, and how these agentic AI models like Google Gemini 2.0 and other models will help with not only the content generation and insights, but also actually acting on those insights and performing the tasks. Real quickly on a couple of other topics: RAG, we talked about this last year, retrieval augmented generation. I'm seeing a couple of specialized areas in this.
One is multimodal RAG, where you can use the RAG techniques for text content and also audio, video, and images, to make them work together for real-world use cases. And the other one is called graph RAG: basically, using the RAG techniques on knowledge graphs, because the graph data is already interconnected, so using RAG techniques on top of that will make it even more powerful. And I think the last one I want to mention is AI-powered PCs. Again, AI is coming everywhere.
So especially in line with the local first architectures that we are seeing in other areas, how much computing can I do locally on the device, whether it’s my smartphone or a tablet or IoT device. So this AI is going to be powering the PCs going forward. We already are hearing about Apple intelligence and other similar technologies. But Yes, other than that AI, like you all mentioned GitHub Copilot, so AI is going to be everywhere in the software development life cycle as well.
I had one use case where multiple agents are used for code generation, document generation and test case generation in a software development life cycle. It's only going to grow more next year and be even more like a peer programmer. That's what we always talk about: how can AI be an architect, how can AI be a programmer, or how can AI be a QA software engineer? So I think we're going to see more developments in those areas. So that's what I'm seeing. I don't know, Thomas, Shane or Renato, are you guys seeing any other trends in the AI space?
Shane Hastie: So I’m definitely seeing the different pace of adoption. As I mentioned right at the beginning, the organizations for whom software is now their core business, but they still think of themselves as not software companies, the banks, the financial institutions and so forth. They’re struggling with wanting to bring in, wanting the benefits and really having to tackle the ethical challenges, the governance challenges, but overcoming those, recognizing those, the limited selection of tools that are available. So one organization, the AI tool they’re allowed to use is Copilot. Nothing wrong with Copilot, there are 79,000 at least other alternates. But even in that sort of big space, there’s a dozen that people should be looking at. And I think one of the risks in that is that they end up narrowing the options and not getting the real benefits.
Thomas Betts: I saw this in my company. We did a very slow rollout of a pilot of GitHub Copilot, and they wanted those people to say, "Here's how we use it", but we're not going to just let everyone go at it. And part of it, once they said everyone can use it, is you have to go through this training on here's what's actually happening, so everyone understood what you are getting out of it. Things like the hallucinations: you can't assume it's going to be correct, it's only as good as what you give it. But if it doesn't know the answer, its job isn't to know the right answer. Its job is to predict the next word, predict the next code. So it's always going to do that even if that's not the right thing. So you still have maybe even more oversight than if you were just doing a pull request review, or a review of someone else's code, right?
We’ve now done Microsoft Copilot as the standard that the company gets to use. I think this is probably the one you’re referring to, everyone can use the generative AI tool to start doing all the things. And because we’re mostly on the Microsoft stack, there’s the integrations with SharePoint and OneDrive and all of that benefit. So there’s reasons to stay within the ecosystem, but again, every employee has to go through just like our mandatory ethics training and compliance training and if you deal with financial data, you have to go through this extra training. If you’re going to use the AI tools, here’s what you need to know about them, here’s how you’re safe about that. And I think that training is going to have to evolve a lot year to year to year because the AI that we have in 2024 is not the same AI we’re going to have in 2026. It’s going to be vastly different.
What are the key trends in cloud technologies? [36:08]
Daniel Bryant: I think it’s a perfect segue from everyone talking about AI. Renato, our cloud space has lost its shine. We used to be doing all the innovative stuff, all the amazing things, and now with the substrate powering all this amazing innovation going on in the AI space. So you mentioned already about the sharp skew at re:Invent as one example. There’s many other conferences, but the sharp skew at re:Invent towards this AI focus. But I’m kind of curious what’s going on outside of that, Renato? What are you seeing that’s interesting in general from re:Invent and from CloudFlare and from Azure and GCP?
Renato Losio: I think the key point, which has been going on for already two or three years, is that we have moved from real innovation in every area to a more evolutionary approach, where we have new features, but nothing really super new, and that's probably good. I mean, we are getting maybe more boring, but more enterprise-wise, maybe, but that's the direction. Just an example: you mentioned re:Invent. People tend to come out of re:Invent saying, "This has been the best re:Invent ever", usually because they get more gadgets and whatever else, but that's usually the main goal, talking about sustainability. But even Amazon itself, during the keynotes and during the week before re:Invent, was highlighting the 10th anniversary of AWS Lambda and 10 years of Amazon Aurora, 10 years of… what was the third one?
I think container service, and then even I think KMS. And those were all announced 10 years ago at re:Invent. If you look at re:Invent today, it was a great re:Invent, but you don't have something as revolutionary as Lambda. You have cool things, you have a new distributed database, yes, definitely it's cool, but you don't have the same kind of new breaking-the-world things. It's a more evolutionary thing, as well in the AI space. That was of course a key part of it. But yes, there were some new foundation models for Bedrock that are cool, and they got better names, so that even someone like myself who is not that skilled in the area can get when I should use a Lite or a Pro or a Micro model.
At least I know that the price probably follows that. But apart from that, it's quite, I would say, evolutionary. Probably the only exception in this space is Cloudflare, at least the way I see it, because we probably used to consider it just a CDN, we used to consider it mostly networking, but in the last few years they've become a fully fledged cloud provider, with quite interesting services out there at the moment. The other trend, I wouldn't say it is for 2025, because it is already here, at least in the data space, in the database space, in the cloud database space: I think this was the year that Postgres became the de facto standard. Any implementation from any cloud provider has to be, or at least pretend to be, Postgres.
Daniel Bryant: Indeed.
Renato Losio: That’s the direction. Even Amazon doesn’t mention MySQL for new services for GSQL or even Limitless database earlier this year, used to be their first open source compatible database reference point. Now it’s not anymore. All the vector databases are pointing to us as well. So that’s the direction I see at the moment.
Daniel Bryant: Fantastic.
Srini Penchikala: Quickly, Daniel, I have a comment. Renato, you are right in saying that Postgres is getting a lot of attention. Postgres has a vector database extension called pgvector, and that is being used a lot to store the vector embeddings for AI programming. And also the databases are becoming more distributed in terms of the architecture and also hosting. So I've been seeing a lot of databases that need to run on-prem and in the cloud with all the transactional support and the consistency support, so distributed databases are kind of helping in this. So definitely, like you said, just like cloud is a substrate for everything else to happen, database engineering, data engineering and databases are the foundation for all of these powerful AI programs to work. So we don't want to lose focus on the data side of things.
What are the key trends in DevOps and platform engineering? [40:34]
Daniel Bryant: I’ll cover the platform engineering aspects now. So for me, 2024 has definitely been the year of the portal. Backstage has been kicking around for a while. We had folks like Roadie talking about that. It’s got its own colo day at KubeCon. Now the BackstageCon, I’ve also seen the likes of Port and Cortex emerging, lots of funding going into this space and lots of innovation too. Folks are loving a UI, loving a portal, a service catalog, a way to understand all the services they’ve got in their enterprise and how these things knit together. Now I’ve argued in a few of my talks, there’s definitely a missing middle, which we’re sort of labeling as platform orchestration popping up. And this is the missing middle between something like a UI or a CLI, portal, that kind of good stuff. And the infrastructure layer, things like Terraform, Crossplane, Pulumi, cloud infrastructure in general.
Now, I was super excited to see Kief Morris, with the latest edition of his book Infrastructure as Code, talking about this missing middle too, and also Camille Fournier and Ian Nowland, in their platform engineering book that's just been published by O'Reilly. Fantastic read; I thoroughly recommend folks get hold of that. They were also talking about this missing middle as well. So I'm super excited over the next year to see how that develops. Just in general, I'm seeing platform engineering move more into the mainstream. We're seeing more events pop up. I mean, the good folks at Humanitec spun up PlatformCon in 2022; that one is going from strength to strength. There's also a Plat Eng Day at KubeCon now, and at KubeCon in London coming up next year, we're going to see an even bigger Plat Eng Day, I think with two tracks. So I'm super excited about that. I'm definitely at QCon London.
We’re going to dive into platform engineering again, I’ve got a hat tip my good people that were on the track this year, Jessica, Jemma, Aviran, Ana and Andy did an amazing job talking about our platform engineering from their worlds. Topics like APIs came up, abstractions, automation, I often say three A’s of platform engineering, really important. And in particular, Aviran Mordo talks about platform as a runtime. And at Wix they built this pretty much serverless platform and it was a real eye-opener seeing at the scale they’re working at how they’ve really thought about the platform APIs, they’ve really thought about the abstractions to expose the developers. And a whole bunch of the stuff behind the scenes is automated Now it’s all tradeoffs, right? But Aviran, and I saw Andy said the same thing. They’re optimizing for developer experience and not just keeping people happy, but keeping people productive too.
And there’s lots of great research going on around developer experience. I’ve got to hat tip the DX folks, Abi Noda and crew and some great podcasts kicking off in that space. And I’m just really interested about that balance, that sort of business balance I guess with proving out value for the platform, but also making sure developers are enjoying their day-to-day work as well. There’s a whole bunch of platform engineering FOMO that I see in my day-to-day job and people spinning up platform engineering efforts, spinning up platforms without really having business goals associated with them, which I think is a danger. And I’ll hint at some more why I think that’s going later on.
What are our predicted trends for software delivery in 2025? [43:25]
Now it’s that time where we hold you to predictions you are going to make, and next year the InfoQ bonus is based on success or failure of these predictions. So I’d love to go around the room and just hear what you most excited about for 2025 and predictions and we’ll go in the same order if that’s all right. Thomas, we’ll start with you.
Thomas Betts: Yes, I don’t think there’s going to be some dramatic shift. There’s never dramatic shifts in architecture, but I think the sustainability, the green engineering, I think those concepts are just going to start becoming more mainstream. You look at how team topologies and microservices and all these things overlap. All these books start referencing each other, the presentations start talking about each other in the same ideas in different ways. I think we’re going to see architects that just look at it from, “I did this for all of these benefits and I learned to put all these benefits together because they were the right sustainable thing to do and it made my system better”. I want to see that presentation at QCon San Francisco next year of we chose to do some architecture with sustainability in mind, and here’s the benefits we saw from it. Shane.
Shane Hastie: I’m going to build on the people stuff. I think we’re going to see challenges with the return to office mandates. I hope we’re going to see some sensibility coming in that when we bring people together, we bring them together for the right reason and that we get the benefit of those human beings in the same physical space. Doing collaborative work generates innovation. You want to allow that, but you also want to give the space for the work that is more effective when we are remote.
So that combination of the two, and there's no one size fits all, and organizations shifting away from mandates to conversations: let's treat our people as responsible, sensible adults and trust them to figure out what is going to be the best way of getting stuff done. I want to see the continuing evolution, in the team space, of generative AI and other AI tools as partners. I think agentic AI as a partner has huge potential, and I think we're going to start to see some good stuff happening in that space with the people. But again, the critical thinking, the human skills, are becoming more and more important. So what's the prediction there? It's maybe more of a hope.
Daniel Bryant: No, I like it, Shane, very positive. It’s very good. Srini, on to you.
Srini Penchikala: Yes. Thanks, Daniel. Yes, definitely on the AI side, I can take a shot at a couple of predictions. I think the LLMs are going to become more business domain specific in a sense, just like we used to see our banking standards, our insurance industry standards, I think we’ll probably eventually start seeing some finance, FinTech LLM or manufacturing LLM, because that’s where the real value is, right? Because a typical ChatGPT program only knows what’s out in the internet. It doesn’t know exactly what my business does, and I don’t want to share my business information out to the outside world.
But I think there will be some of these consortiums that will come together and share at least non-sensitive proprietary information among the organizations, whether it’s manufacturing or healthcare. And they will start to create some base language models that the applications in those companies can use to train, right? So it could be the large language model, like Llama 2 will have something more specific on top of that, and then applications will be built on top of that. So that’s one prediction for me. And the other one is agents. I think agents, agents and agents. Just like that movie Matrix, agents are coming, right? Hopefully these agents are not as nefarious.
Daniel Bryant: Indeed.
Srini Penchikala: They’re not villains, right? Exactly. But Yes, I think we’ll see more. I think it’s the time for all these generative AI, great content to put into action, not by humans but hopefully by computer programs. So that’s another area that I’m definitely looking forward to seeing. Other than that, I think AI will become, like you said, something like a boring cross-cutting concern that will just enable everything. We don’t even have to think about it. And maybe just like the toys we buy sometimes, they say batteries not included. Maybe in the future the application that are not using AI, which will be very few, they will say, this application does not include AI, right? Because everything else will be AI pretty much, right? So those are my couple of predictions.
Daniel Bryant: I like it, Srini, fantastic stuff. Renato, over to you.
Renato Losio: Well, first of all, I will say that these are my tech predictions in the cloud for 2025 and beyond, as many good ones say, so I will always have the chance next year to say, "Well, there's still time". But I really think that next year will be the first year in the cloud space where, for processors, Intel won't be the default anymore. Basically, we won't consider using Graviton or anything else as the alternative anymore; it will be the de facto choice on most deployments.
And the second one: given as well how different cloud providers have now implemented distributed databases using their own proprietary networks, basically taking advantage of the speed they have, I see the cloud providers going towards distributed systems with less regional focus. As an end user, as a developer, I won't care as much long term about the specific region. The geographical area, probably yes, for compliance and many other reasons, but whether behind my database is Ohio or Northern Virginia or whatever else, I will probably not care so much.
Daniel Bryant: Thanks, Renato. I like the hedge there, or is it a smart move? Well done. So on to my predictions around the platform engineering space: the good folks at Gartner are saying we're about to head into the trough of disillusionment in their model of adoption, and I also think this is true. My prediction for next year is we're going to hear more failure stories around platforms, and ultimately that'll be triggered by a lack of business goals associated with the engineering going on. Now, I think this is just part of the way it goes, right? We saw it with microservices, we saw it with DevOps as well.
Ultimately, I think it leads us to a better place. You go into the trough of disillusionment, you hopefully come out the other side onto the plateau of productivity, and you're delivering business value and it's all good stuff. And I think we're going to bake in a lot of learnings that we had sort of temporarily forgotten. This is, again, the way of the world.
We often embrace a new technology, embrace a new practice, and we sort of temporarily forget the things we’ve learned before. And I’m seeing in platform engineering, definitely a lack of focus on business goals, but also a lack of focus on good architecture practices, things like coupling and cohesion. And in particular creating appropriate abstractions for developers to get their work done and also composability of the platform.
So I think in 2025, we’re going to see a lot more around golden paths and golden bricks and making it easy to do the right thing to code ship run for developers, deliver business value, and also allow them to compose the appropriate workflow for them. And again, that’ll be dependent on the organization they’re working on. But I’m super excited to see where platform engineering is going in 2025. As always, it’s your pleasure. We could talk all day, all night, I’m sure here, but it’s fantastic just to get an hour of everyone’s time just to review all these things as we close up the year. I’ll say thank you so much for everyone and we’ll talk again soon.
Shane Hastie: Thank you, Daniel.
Srini Penchikala: Thank you.
Renato Losio: Thank you, Daniel.
Thomas Betts: Thank you Daniel, and have a happy New Year.