Java News Roundup: Spring Milestone, Payara Platform, Jakarta EE 11 Update, Apache Fory

MMS • Michael Redlich
Article originally posted on InfoQ. Visit InfoQ

This week’s Java roundup for June 9th, 2025, features news highlighting: the sixth milestone release of Spring Framework 7.0; the June 2025 edition of Payara Platform; point releases of Apache Tomcat, Micrometer, Project Reactor and Micronaut; Jakarta EE 11 Platform on the verge of a GA release; and Apache Fury renamed to Apache Fory.
JDK 25
Build 27 of the JDK 25 early-access builds was made available this past week featuring updates from Build 26 that include fixes for various issues. More details on this release may be found in the release notes.
JDK 26
Build 2 of the JDK 26 early-access builds was also made available this past week featuring updates from Build 1 that include fixes for various issues. Further details on this release may be found in the release notes.
Jakarta EE
In his weekly Hashtag Jakarta EE blog, Ivar Grimstad, Jakarta EE developer advocate at the Eclipse Foundation, provided an update on Jakarta EE 11 and Jakarta EE 12, writing:
We’re finally there! The release review for the Jakarta EE 11 Platform specification is ongoing. All the members of the Jakarta EE Specification Committee have voted, so as soon as the minimum duration of 7 days is over, the release of the specification will be approved. Public announcements and celebrations will follow in the weeks to come.
With Jakarta EE 11 out the door, the Jakarta EE Platform project can focus entirely on Jakarta EE 12. A project Milestone 0 is being planned as we speak. One of the activities of that milestone will be to get all CI Jobs and configurations set up for the new way of releasing to Maven Central due to the end-of-life of OSSRH. There will be a new release of the EE4J Parent POM to support this.
The road to Jakarta EE 11 included five milestone releases, the release of the Core Profile in December 2024, the release of Web Profile in April 2025, and a first release candidate of the Platform before its anticipated GA release in June 2025.
Spring Framework
The sixth milestone release of Spring Framework 7.0.0 delivers bug fixes, improvements in documentation, dependency upgrades and new features such as: initial support for the Spring Retry project; and a new getObjectMapper() method, defined in the JacksonJsonMessageConverter class, that replaces the equivalent method offered by the now-deprecated MappingJackson2MessageConverter class. More details on this release may be found in the release notes.
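A minimal sketch of the new accessor in use follows; the package locations of JacksonJsonMessageConverter and the Jackson 3 ObjectMapper are assumptions based on Spring messaging and Jackson 3 conventions, so treat the imports as illustrative rather than definitive:

```java
// Illustrative imports; actual packages may differ in Spring Framework 7.0.0-M6.
import org.springframework.messaging.converter.JacksonJsonMessageConverter;
import tools.jackson.databind.ObjectMapper;

public class MessageConverterConfig {

    ObjectMapper configureConverter() {
        // Jackson 3-based converter that supersedes MappingJackson2MessageConverter
        JacksonJsonMessageConverter converter = new JacksonJsonMessageConverter();

        // New accessor mirroring the method previously offered by the
        // now-deprecated MappingJackson2MessageConverter class
        ObjectMapper mapper = converter.getObjectMapper();
        return mapper;
    }
}
```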
The release of Spring Framework 6.2.8 and 6.1.21 primarily provides a resolution for CVE-2025-41234, RFD Attack via “Content-Disposition” Header Sourced from Request, in which an application is vulnerable to a Reflected File Download attack when a Content-Disposition header is set with a non-ASCII character set and its filename attribute is derived from user-supplied input. Further details on these releases may be found in the release notes for version 6.2.8 and version 6.1.21.
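As general context for this vulnerability class (not a reproduction of the patch itself), the sketch below builds a Content-Disposition header with Spring's ContentDisposition builder, which encodes a non-ASCII, user-supplied filename rather than writing raw input into the header value; the helper method and filename handling shown are illustrative assumptions:

```java
import java.nio.charset.StandardCharsets;

import org.springframework.http.ContentDisposition;
import org.springframework.http.HttpHeaders;

public class DownloadHeaders {

    // Build a Content-Disposition header from a user-supplied filename.
    // The builder encodes non-ASCII characters (RFC 6266/5987 style) instead
    // of echoing raw user input into the header value.
    static HttpHeaders attachmentHeaders(String userSuppliedName) {
        ContentDisposition disposition = ContentDisposition.attachment()
                .filename(userSuppliedName, StandardCharsets.UTF_8)
                .build();

        HttpHeaders headers = new HttpHeaders();
        headers.setContentDisposition(disposition);
        return headers;
    }
}
```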
Payara Platform
Payara has released their June 2025 edition of the Payara Platform that includes Community Edition 6.2025.6, Enterprise Edition 6.27.0 and Enterprise Edition 5.76.0. All three releases deliver: improved deployment times using a new /lib/warlibs directory that allows shared libraries to be placed outside individual application packages; and support for bean validation in MicroProfile OpenAPI 3.1.
This edition also delivers Payara 7.2025.1.Alpha2, which advances support for Jakarta EE 11 and includes updates to Eclipse Expressly 6.0.0, Eclipse Soteria 4.0.1 and Eclipse Krazo 3.0, compatible implementations of the Jakarta Expression Language 6.0, Jakarta Security 4.0 and Jakarta MVC 3.0 specifications, respectively.
More details on these releases may be found in the release notes for Community Edition 6.2025.6, Enterprise Edition 6.27.0 and Enterprise Edition 5.76.0.
Micronaut
The Micronaut Foundation has released version 4.8.3 of the Micronaut Framework, based on Micronaut Core 4.8.18, featuring bug fixes and patch updates to modules: Micronaut Security, Micronaut Serialization, Micronaut Oracle Cloud, Micronaut SourceGen, Micronaut for Spring, Micronaut Data, Micronaut Micrometer and Micronaut Coherence. Further details on this release may be found in the release notes.
Micrometer
Versions 1.15.1, 1.14.8 and 1.13.15 of Micrometer Metrics provide dependency upgrades and resolutions to notable issues such as: a ConcurrentModificationException thrown from the IndexProviderFactory class, which used a HashMap, a collection that is not thread safe, upon building an instance of the DistributionSummary interface. More details on these releases may be found in the release notes for version 1.15.1, version 1.14.8 and version 1.13.15.
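For context on where the fix applies, a DistributionSummary is typically registered as in the sketch below; the meter name, base unit, and use of SimpleMeterRegistry are illustrative choices, and the reported exception could surface when many threads register summaries concurrently:

```java
import io.micrometer.core.instrument.DistributionSummary;
import io.micrometer.core.instrument.MeterRegistry;
import io.micrometer.core.instrument.simple.SimpleMeterRegistry;

public class ResponseSizeMetrics {

    public static void main(String[] args) {
        MeterRegistry registry = new SimpleMeterRegistry();

        // Registering a summary exercises the internal index/lookup machinery
        // where the reported ConcurrentModificationException could occur
        // under concurrent registration.
        DistributionSummary responseSizes = DistributionSummary.builder("http.response.size")
                .baseUnit("bytes")
                .register(registry);

        responseSizes.record(512);
        responseSizes.record(2048);

        System.out.println("count=" + responseSizes.count() + ", total=" + responseSizes.totalAmount());
    }
}
```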
Versions 1.5.1, 1.4.7 and 1.3.13 of Micrometer Tracing ship with: dependency upgrades to Micrometer Metrics 1.15.1, 1.14.8 and 1.13.15, respectively; and a resolution to an issue in the append(Context context, Map baggage) method, defined in the ReactorBaggage class, which adds new baggage values to an existing instance of the Project Reactor Context interface, but unintentionally overwrote any conflicting keys with the existing values rather than the newly provided ones in the baggage parameter. Further details on these releases may be found in the release notes for version 1.5.1, version 1.4.7 and version 1.3.13.
Project Reactor
The fourth milestone release of Project Reactor 2025.0.0 provides dependency upgrades to reactor-core 3.8.0-M4, reactor-netty 1.3.0-M4 and reactor-pool 1.2.0-M4. There was also a realignment to version 2025.0.0-M4 with the reactor-addons 3.5.2 and reactor-kotlin-extensions 1.2.3 artifacts that remain unchanged. With this release, Reactor Kafka is no longer part of the Project Reactor BOM, as Reactor Kafka was discontinued in May 2025. More details on this release may be found in the release notes.
Similarly, Project Reactor 2024.0.7, the seventh maintenance release, provides dependency upgrades to reactor-core 3.7.7, reactor-netty 1.2.7 and reactor-pool 1.1.3. There was also a realignment to version 2024.0.7 with the reactor-addons 3.5.2, reactor-kotlin-extensions 1.2.3 and reactor-kafka 1.3.23 artifacts that remain unchanged. Further details on this release may be found in the release notes.
And finally, Project Reactor 2023.0.19, the nineteenth maintenance release, provides dependency upgrades to reactor-core 3.6.18, reactor-netty 1.1.31 and reactor-pool 1.0.11. There was also a realignment to version 2023.0.19 with the reactor-addons 3.5.2, reactor-kotlin-extensions 1.2.3 and reactor-kafka 1.3.23 artifacts that remain unchanged. This is the last release in the 2023.0.x release train, as it is being removed from OSS support. More details on this release may be found in the release notes and their support policy.
Apache Software Foundation
Versions 11.0.8, 10.1.42 and 9.0.106 of Apache Tomcat (announced here, here and here, respectively) ship with bug fixes and improvements such as: two new attributes, maxPartCount and maxPartHeaderSize, added to the Connector class to provide finer-grained control of multi-part request processing; and a refactor of the TaskQueue class to implement the new RetryableQueue interface for improved integration with custom instances of the Java Executor interface that provide their own implementation of the Java BlockingQueue interface. Further details on these releases may be found in the release notes for version 11.0.8, version 10.1.42 and version 9.0.106.
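For embedded Tomcat users, the new attributes would presumably be applied as connector properties, mirroring the maxPartCount and maxPartHeaderSize attributes a server.xml Connector element would carry; in the sketch below, the attribute names come from the announcement, while the embedded setup and the specific values are illustrative assumptions rather than defaults:

```java
import org.apache.catalina.LifecycleException;
import org.apache.catalina.connector.Connector;
import org.apache.catalina.startup.Tomcat;

public class EmbeddedTomcat {

    public static void main(String[] args) throws LifecycleException {
        Tomcat tomcat = new Tomcat();
        tomcat.setPort(8080);

        // Configure the new multi-part request limits on the Connector.
        // Attribute names are taken from the 11.0.8/10.1.42/9.0.106 announcements;
        // the values here are arbitrary examples, not recommended settings.
        Connector connector = tomcat.getConnector();
        connector.setProperty("maxPartCount", "10");
        connector.setProperty("maxPartHeaderSize", "512");

        tomcat.start();
        tomcat.getServer().await();
    }
}
```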
The Apache Software Foundation (ASF) has announced that their polyglot serialization framework, formerly known as Apache Fury, has been renamed to Apache Fory to resolve naming conflicts identified by the ASF Brand Management team. The team decided on the new name, Fory, to preserve phonetic similarity to Fury “while establishing a distinct identity aligned with ASF standards.”

MMS • RSS
Posted on mongodb google news. Visit mongodb google news

Tier 5 Technologies, a leading West African enterprise IT and cloud services provider, has formed a strategic partnership with global database giant MongoDB to drive digital transformation across Nigeria and the wider West African region.
The collaboration aims to position the region to capture a significant share of Africa’s projected $100 billion digital economy.
The partnership was officially unveiled during the recent “Legacy Modernisation Day” event in Lagos, which was co-hosted by both companies. The focus is on supporting organisations in modernising their IT infrastructure to fully leverage emerging technologies, including artificial intelligence, while addressing legacy system challenges that have slowed innovation in many African businesses.
Regional Director for Middle East and Africa at MongoDB, Anders Irlander Fabry, described the partnership with Tier 5 Technologies as a crucial move in expanding MongoDB’s footprint in Nigeria, Africa’s most populous country and one of the continent’s leading economies.
“Tier 5 Technologies brings a proven track record, a strong local network, and a clear commitment to MongoDB as their preferred data platform. They’ve already demonstrated their dedication by hiring MongoDB-focused sales specialists, connecting us with top C-level executives, and closing our first enterprise deals in Nigeria in record time. We’re excited about the impact we can make together in this strategic market,” Fabry stated.
With Africa’s digital economy expected to reach $100 billion, MongoDB and Tier 5 Technologies aim to equip organisations with cutting-edge database solutions that enable scalability, agility, and resilience. MongoDB’s modern database technologies are widely used by millions of developers and over 50,000 customers globally, including 70 per cent of Fortune 100 companies.
MongoDB’s product suite includes the self-managed MongoDB Enterprise Advanced and MongoDB Atlas, a fully managed cloud database platform available on AWS, Google Cloud, and Microsoft Azure. MongoDB Atlas, in particular, offers a compelling solution for African enterprises by addressing infrastructure limitations through cloud-based, scalable, low-latency database services that empower developers to build globally competitive applications.
The partnership represents a significant step towards unlocking Africa’s digital potential, providing businesses across the continent with the tools and technology to thrive in the increasingly competitive global marketplace.

MMS • RSS
Posted on mongodb google news. Visit mongodb google news
Park Avenue Securities LLC grew its position in MongoDB, Inc. (NASDAQ:MDB – Free Report) by 52.6% in the first quarter, according to its most recent Form 13F filing with the Securities and Exchange Commission. The institutional investor owned 2,630 shares of the company’s stock after buying an additional 907 shares during the quarter. Park Avenue Securities LLC’s holdings in MongoDB were worth $461,000 as of its most recent filing with the Securities and Exchange Commission.
Several other institutional investors and hedge funds have also recently modified their holdings of MDB. Strategic Investment Solutions Inc. IL acquired a new position in MongoDB in the 4th quarter valued at about $29,000. NCP Inc. purchased a new position in shares of MongoDB in the fourth quarter valued at approximately $35,000. Coppell Advisory Solutions LLC grew its holdings in shares of MongoDB by 364.0% in the fourth quarter. Coppell Advisory Solutions LLC now owns 232 shares of the company’s stock valued at $54,000 after purchasing an additional 182 shares in the last quarter. Smartleaf Asset Management LLC increased its stake in MongoDB by 56.8% during the 4th quarter. Smartleaf Asset Management LLC now owns 370 shares of the company’s stock worth $87,000 after buying an additional 134 shares during the period. Finally, Manchester Capital Management LLC lifted its holdings in MongoDB by 57.4% during the 4th quarter. Manchester Capital Management LLC now owns 384 shares of the company’s stock worth $89,000 after buying an additional 140 shares in the last quarter. Hedge funds and other institutional investors own 89.29% of the company’s stock.
MongoDB Stock Performance
Shares of MongoDB stock opened at $205.63 on Friday. The company has a market capitalization of $16.69 billion, a P/E ratio of -75.05 and a beta of 1.39. MongoDB, Inc. has a 1 year low of $140.78 and a 1 year high of $370.00. The company’s fifty day moving average is $181.84 and its two-hundred day moving average is $226.14.
MongoDB (NASDAQ:MDB – Get Free Report) last issued its quarterly earnings results on Wednesday, June 4th. The company reported $1.00 earnings per share (EPS) for the quarter, beating the consensus estimate of $0.65 by $0.35. The firm had revenue of $549.01 million for the quarter, compared to analysts’ expectations of $527.49 million. MongoDB had a negative return on equity of 12.22% and a negative net margin of 10.46%. The firm’s revenue for the quarter was up 21.8% on a year-over-year basis. During the same quarter in the previous year, the firm posted $0.51 EPS. As a group, equities analysts expect that MongoDB, Inc. will post -1.78 earnings per share for the current year.
Analysts Set New Price Targets
Several equities analysts have commented on the stock. Piper Sandler boosted their price target on shares of MongoDB from $200.00 to $275.00 and gave the stock an “overweight” rating in a research note on Thursday, June 5th. KeyCorp cut MongoDB from a “strong-buy” rating to a “hold” rating in a research note on Wednesday, March 5th. Monness Crespi & Hardt raised MongoDB from a “neutral” rating to a “buy” rating and set a $295.00 price target on the stock in a research report on Thursday, June 5th. Wells Fargo & Company cut MongoDB from an “overweight” rating to an “equal weight” rating and decreased their price objective for the company from $365.00 to $225.00 in a research note on Thursday, March 6th. Finally, Morgan Stanley cut their target price on shares of MongoDB from $315.00 to $235.00 and set an “overweight” rating on the stock in a research note on Wednesday, April 16th. Eight research analysts have rated the stock with a hold rating, twenty-four have assigned a buy rating and one has assigned a strong buy rating to the stock. According to data from MarketBeat, MongoDB currently has an average rating of “Moderate Buy” and an average target price of $282.47.
Insider Buying and Selling at MongoDB
In related news, Director Hope F. Cochran sold 1,175 shares of the company’s stock in a transaction that occurred on Tuesday, April 1st. The stock was sold at an average price of $174.69, for a total value of $205,260.75. Following the sale, the director now owns 19,333 shares in the company, valued at $3,377,281.77. This represents a 5.73% decrease in their position. The transaction was disclosed in a legal filing with the SEC, which is available at this hyperlink. Also, CAO Thomas Bull sold 301 shares of the firm’s stock in a transaction that occurred on Wednesday, April 2nd. The shares were sold at an average price of $173.25, for a total value of $52,148.25. Following the completion of the transaction, the chief accounting officer now directly owns 14,598 shares of the company’s stock, valued at $2,529,103.50. The trade was a 2.02% decrease in their ownership of the stock. The disclosure for this sale can be found here. In the last three months, insiders have sold 49,208 shares of company stock valued at $10,167,739. 3.10% of the stock is owned by company insiders.
MongoDB Company Profile
MongoDB, Inc., together with its subsidiaries, provides a general purpose database platform worldwide. The company provides MongoDB Atlas, a hosted multi-cloud database-as-a-service solution; MongoDB Enterprise Advanced, a commercial database server for enterprise customers to run in the cloud, on-premises, or in a hybrid environment; and Community Server, a free-to-download version of its database, which includes the functionality that developers need to get started with MongoDB.

MMS • Renato Losio
Article originally posted on InfoQ. Visit InfoQ

The recent article by Zhou Sun, “HTAP is Dead,” sparked a debate in the data community about the future of hybrid transaction/analytical processing. HTAP was meant to help integrate historical and online data at scale, supporting more flexible query methods and reducing business complexity.
In the article, Sun, co-founder and CEO at Mooncake Labs, argues that the long-promised vision of unifying transactional and analytical workloads in a single system has failed to materialize. Gartner introduced the term HTAP (Hybrid Transactional and Analytical Processing) over a decade ago, announcing it as “the next big DB architecture,” where the goal was to close the gap between operational and analytical systems.
The article traces how OLTP and OLAP database workloads started as one in the 1970s and became separated a decade later, with HTAP attempting to merge them again in the 2010s. Sun believes that practical challenges like resource contention, complexity, and evolving hardware architectures make dedicated, specialized systems a more viable path forward. Sun writes:
The cloud also started the move away from tightly coupled warehouses toward modular lakes built on object storage. In trying to escape the traditional warehouse/database, data teams started assembling their own custom systems.
Years ago, HTAP was considered a requirement for emerging workloads like pricing, fraud detection, and personalization, with SingleStoreDB and TiDB among the main players in the market. The author contends that cloud data warehouses like Snowflake and BigQuery emerged as the clear winners in the 2020s by focusing exclusively on analytical processing and separating storage from compute, which allowed for scalable, cost-effective solutions without the complexity of HTAP systems. Sun notes that while transactional databases have also evolved, they have largely remained separate from analytics, and attempts to merge the two have failed to gain broad adoption. Sun adds:
Even in today’s disaggregated data stack, the need remains the same: fast OLAP queries on fresh transactional data. This now happens through a web of streaming pipelines, cloud data lakes, and real-time query layers. It’s still HTAP; but through composition instead of consolidation of databases.
To move beyond traditional warehouses and databases, data teams are now assembling their own custom systems using what Sun calls “best-in-class” components. These architectures combine OLTP systems and stream processors as the write-ahead log (WAL), Iceberg as the storage layer, query engines such as Spark and Trino for data processing, and real-time systems like ClickHouse or Elasticsearch indexes. On Hacker News, Thom Lawrence, founder and former CTO of Statsbomb, writes:
You cannot say HTAP is dead when the alternative is so much complexity and so many moving parts. Most enterprises are burning huge amounts of resources literally just shuffling data around for zero business value. The dream is a single data mesh presenting an SQL userland (…) we are close but we are not there yet and I will be furious if people stop trying to reach this endgame.
Sun’s article sparked a debate in the community, with Peter Zaitsev, founder of Percona and open source advocate, summarizing:
There is no “one size fits all” – while large teams are realizing tight coupling is problematic, for small teams, small projects it is actually very convenient and practical to have a single database, which does “everything” reasonably well, as such I think HTAP makes a lot of sense as a feature, but probably not as a name as we need our databases to be more than just Analytical and Transactional.
Many data engineers now agree that the once-promising HTAP model is being reconsidered, with the growing success of PostgreSQL in recent years illustrating this shift. As technology evolves, new paradigms are challenging the relevance of HTAP in modern data architecture.
Fort Washington Investment Advisors Inc. OH Buys 2,720 Shares of MongoDB, Inc. (NASDAQ:MDB)

MMS • RSS
Posted on mongodb google news. Visit mongodb google news
Fort Washington Investment Advisors Inc. OH bought a new stake in MongoDB, Inc. (NASDAQ:MDB – Free Report) during the 1st quarter, according to the company in its most recent Form 13F filing with the Securities and Exchange Commission (SEC). The institutional investor bought 2,720 shares of the company’s stock, valued at approximately $477,000.
Several other hedge funds have also recently added to or reduced their stakes in MDB. Strategic Investment Solutions Inc. IL acquired a new stake in shares of MongoDB during the fourth quarter worth about $29,000. NCP Inc. acquired a new stake in shares of MongoDB during the fourth quarter worth about $35,000. Coppell Advisory Solutions LLC raised its position in shares of MongoDB by 364.0% during the fourth quarter. Coppell Advisory Solutions LLC now owns 232 shares of the company’s stock worth $54,000 after purchasing an additional 182 shares during the period. Smartleaf Asset Management LLC raised its position in shares of MongoDB by 56.8% during the fourth quarter. Smartleaf Asset Management LLC now owns 370 shares of the company’s stock worth $87,000 after purchasing an additional 134 shares during the period. Finally, Manchester Capital Management LLC raised its position in shares of MongoDB by 57.4% during the fourth quarter. Manchester Capital Management LLC now owns 384 shares of the company’s stock worth $89,000 after purchasing an additional 140 shares during the period. Institutional investors and hedge funds own 89.29% of the company’s stock.
Wall Street Analysts Forecast Growth
MDB has been the topic of a number of research analyst reports. Scotiabank raised their price target on MongoDB from $160.00 to $230.00 and gave the stock a “sector perform” rating in a research report on Thursday, June 5th. Needham & Company LLC reaffirmed a “buy” rating and set a $270.00 price target on shares of MongoDB in a research report on Thursday, June 5th. Daiwa America raised MongoDB to a “strong-buy” rating in a research report on Tuesday, April 1st. Wedbush reaffirmed an “outperform” rating and set a $300.00 price target on shares of MongoDB in a research report on Thursday, June 5th. Finally, Morgan Stanley dropped their price target on MongoDB from $315.00 to $235.00 and set an “overweight” rating on the stock in a research report on Wednesday, April 16th. Eight investment analysts have rated the stock with a hold rating, twenty-four have given a buy rating and one has issued a strong buy rating to the stock. Based on data from MarketBeat.com, the stock presently has an average rating of “Moderate Buy” and a consensus price target of $282.47.
MongoDB Stock Down 2.4%
MDB stock opened at $205.63 on Friday. The business has a 50 day moving average price of $180.65 and a two-hundred day moving average price of $227.04. The company has a market cap of $16.69 billion, a price-to-earnings ratio of -75.05 and a beta of 1.39. MongoDB, Inc. has a twelve month low of $140.78 and a twelve month high of $370.00.
MongoDB (NASDAQ:MDB – Get Free Report) last issued its earnings results on Wednesday, June 4th. The company reported $1.00 EPS for the quarter, topping the consensus estimate of $0.65 by $0.35. MongoDB had a negative return on equity of 12.22% and a negative net margin of 10.46%. The company had revenue of $549.01 million during the quarter, compared to the consensus estimate of $527.49 million. During the same period in the previous year, the company posted $0.51 EPS. The firm’s revenue for the quarter was up 21.8% compared to the same quarter last year. On average, equities research analysts anticipate that MongoDB, Inc. will post -1.78 EPS for the current fiscal year.
Insider Transactions at MongoDB
In other news, CAO Thomas Bull sold 301 shares of MongoDB stock in a transaction that occurred on Wednesday, April 2nd. The stock was sold at an average price of $173.25, for a total transaction of $52,148.25. Following the completion of the transaction, the chief accounting officer now directly owns 14,598 shares in the company, valued at approximately $2,529,103.50. The trade was a 2.02% decrease in their position. The sale was disclosed in a legal filing with the SEC, which can be accessed through this link. Also, Director Hope F. Cochran sold 1,175 shares of MongoDB stock in a transaction that occurred on Tuesday, April 1st. The stock was sold at an average price of $174.69, for a total value of $205,260.75. Following the sale, the director now directly owns 19,333 shares of the company’s stock, valued at approximately $3,377,281.77. The trade was a 5.73% decrease in their ownership of the stock. The disclosure for this sale can be found here. In the last three months, insiders sold 49,208 shares of company stock worth $10,167,739. 3.10% of the stock is owned by company insiders.
MongoDB Company Profile
MongoDB, Inc., together with its subsidiaries, provides a general purpose database platform worldwide. The company provides MongoDB Atlas, a hosted multi-cloud database-as-a-service solution; MongoDB Enterprise Advanced, a commercial database server for enterprise customers to run in the cloud, on-premises, or in a hybrid environment; and Community Server, a free-to-download version of its database, which includes the functionality that developers need to get started with MongoDB.

MMS • Renato Losio
Article originally posted on InfoQ. Visit InfoQ

AWS has recently announced the general availability of the CDK Toolkit Library. This new Node.js library allows developers to programmatically control the AWS CDK and build additional automation around it, exposing classes and methods to synthesize, deploy, and destroy stacks, among other capabilities.
The CDK Toolkit Library enables developers to perform CDK actions programmatically in code rather than relying on CLI commands. Currently supported only in TypeScript, the library can be used to create custom tools, build specialized CLI applications, and integrate CDK capabilities into existing development workflows. Adam Keller, senior cloud architect at AWS, explains the main goal of the project:
Until now, the primary way to interact with the AWS CDK was through the CDK CLI, which presented challenges when building automation around the CDK as users couldn’t directly interact with the CDK toolkit natively in their code.
According to the documentation, the CDK Toolkit Library is suited for advanced infrastructure deployments, including automation within CI/CD pipelines, the creation of custom validation or approval steps, and the implementation of patterns across multiple environments.
The AWS CDK is an open-source framework that enables the definition of cloud infrastructure in code and its subsequent provisioning through AWS CloudFormation. It includes two main components: a class library for modeling infrastructure and a toolkit that provides either a command-line interface or a programmatic library to operate on those models.
The new Node.js library provides programmatic interfaces for the following six CDK actions: Synthesis, to generate CloudFormation templates and deployment artifacts; Deployment, to provision or update infrastructure; List, to view information about stacks and their dependencies; Watch, to monitor CDK applications for local changes; Rollback, to return stacks to their latest stable state; and Destroy, to remove stacks and associated resources. Keller adds:
The AWS CDK Toolkit Library opens up a whole new range of possibilities for platform engineers and developers who need finer grained control over how and when their infrastructure is deployed and tested.
Among the example scenarios provided, AWS highlights the automatic validation of application logic, maintaining ephemeral environments for integration or end-to-end testing, and cleaning up resources immediately after test completion to reduce cloud costs and configuration drift. Ran Isenberg, principal software architect at CyberArk and AWS Hero, comments:
While it’s a step in the right direction, I don’t think it replaces the deploy script we wrote to every stack. We have so many options that the CDK toolkit will not support, as it’s too specific to our needs and configurations.
More details are available on GitHub, including options to report bugs, provide feedback, share ideas, and request new features. The community has suggested exposing additional classes and functionalities, such as EnvironmentAccess, as potential future enhancements.
The CDK Toolkit Library is available in all regions where the AWS CDK is supported. A getting-started page provides instructions on how to install, configure, and customize the library.

MMS • Robert Krzaczynski
Article originally posted on InfoQ. Visit InfoQ

Meta has introduced V-JEPA 2, a new video-based world model designed to improve machine understanding, prediction, and planning in physical environments. The model extends the Joint Embedding Predictive Architecture (JEPA) framework and is trained to predict outcomes in embedding space using video data.
The model is trained in two phases. In the first, over one million hours of video and one million images are used for self-supervised pretraining without any action labels. This enables the model to learn representations of motion, object dynamics, and interaction patterns. In the second phase, it is fine-tuned on 62 hours of robot data that includes both video and action sequences. This stage allows the model to make action-conditioned predictions and support planning.
One Reddit user commented on the approach:
Predicting in embedding space is going to be more compute efficient, and also it is closer to how humans reason… Really feeling the AGI with this approach, regardless of the current results using the system.
Others have noted the limits of the approach. Dorian Harris, who focuses on AI strategy and education, wrote:
AGI requires broader capabilities than V-JEPA 2’s specialised focus. It is a significant yet narrow breakthrough, and the AGI milestone is overstated.
In robotic applications, V-JEPA 2 is used for short- and long-horizon manipulation tasks. For example, when given a goal in the form of an image, the robot uses the model to simulate possible actions and select those that move it closer to the goal. The system replans at each step, using a model-predictive control loop. Meta reports task success rates between 65% and 80% for pick-and-place tasks involving novel objects and settings.
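The replanning structure described above can be sketched in code to make the loop concrete; the following Java snippet is a conceptual illustration only, not Meta's implementation, and the Robot, WorldModel and Action types are hypothetical placeholders:

```java
import java.util.Comparator;
import java.util.List;

// Conceptual sketch of a model-predictive control loop: at each step, candidate
// actions are scored by a learned world model against a goal embedding, the best
// candidate is executed, and the plan is recomputed from the new observation.
public class MpcLoop {

    interface WorldModel {
        // Predicted distance to the goal (in embedding space) after taking an action.
        double predictedDistanceToGoal(double[] stateEmbedding, Action action, double[] goalEmbedding);
    }

    interface Robot {
        double[] observeEmbedding();
        void execute(Action action);
        boolean goalReached(double[] goalEmbedding);
    }

    record Action(String name) {}

    static void run(Robot robot, WorldModel model, double[] goal, List<Action> candidates, int maxSteps) {
        for (int step = 0; step < maxSteps && !robot.goalReached(goal); step++) {
            double[] state = robot.observeEmbedding();
            // Pick the candidate action the model predicts moves closest to the goal.
            Action best = candidates.stream()
                    .min(Comparator.comparingDouble(a -> model.predictedDistanceToGoal(state, a, goal)))
                    .orElseThrow();
            robot.execute(best);
            // Replan on the next iteration with the fresh observation.
        }
    }
}
```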
The model has also been evaluated on benchmarks such as Something-Something v2, Epic-Kitchens-100, and Perception Test. When used with lightweight readouts, it performs competitively on tasks related to motion recognition and future action prediction.
Meta is also releasing three new benchmarks focused on physical reasoning from video: IntPhys 2, which tests for recognition of physically implausible events; MVPBench, which assesses video-question answering under minimal changes; and CausalVQA, which focuses on cause-effect reasoning and planning.
David Eberle, CEO of Typewise, noted:
The ability to anticipate and adapt to dynamic situations is exactly what is needed to make AI agents more context-aware in real-world customer interactions, too, not just in robotics.
Model weights, code, and datasets are available via GitHub and Hugging Face. A leaderboard has been launched for community benchmarking.

MMS • Rachel Laycock
Article originally posted on InfoQ. Visit InfoQ

Transcript
Shane Hastie: Good day folks. This is Shane Hastie, the InfoQ Engineering Culture podcast. Today I get to sit down with Rachel Laycock. Rachel, welcome. Thank you so much for taking the time to talk to us.
Rachel Laycock: Hi, Shane. Thanks for having me.
Shane Hastie: Now, there’s probably a few folks in our audience who don’t know who you are. So let’s start with who’s Rachel.
Introductions [00:56]
Rachel Laycock: So I am the global CTO for Thoughtworks. I’ve been at Thoughtworks for 15 years and I play lots of different technology leadership roles, but my background is as a software developer.
Shane Hastie: In your global CTO role at Thoughtworks, you get to see a lot of what is happening and you’re across many of the trends. One of the things that I know that you are responsible for or certainly deeply involved in is the Thoughtworks Technology Radar. Can you tell us a little bit about how does that come about?
The Technology Radar Process [01:29]
Rachel Laycock: So I am responsible for it, and it’s been running for over 10 years. So my predecessor started it and it started as basically a way for her to understand what was going on. Going to your point of the role, we’ve got 10,000 people across the globe in many different countries and regions. So getting a view of what’s going on and what trends are important was really challenging. And that was when we were probably a third of the size that we are now. And so essentially what we do is twice a year we kind of put a call-out to the Thoughtworkers on the ground of what’s happening, what tools and techniques and platforms and languages and frameworks are you using do you think are interesting? What have you managed to get into production, what have been your experiences with it?
And we mine that from across the globe. And then we get together in person and spend a week basically debating. So we get tech leaders from across the different parts of the globe in different roles, whether they might be head of tech or a regional CTO, or there might be a practice lead and we debate about where these things should go. So they’re in different quadrants. So it could be a language or a framework or a platform or a technique. But the real debate is whether it’s something we’re assessing, whether it’s something that goes into trial or we think it’s something that people should adopt as a default, or if we say hold and you should proceed with caution, which is actually what the hold ring means. People often don’t use it at all, but it just means like, “Hey, we’ve identified some challenges with this, so you should probably proceed with caution”.
And so we spend a week debating it and getting it down to, we try to get around a hundred blips. We use the metaphor of the Radar, so things coming in and out. And then over the course of that week, what kind of themes come up? So what are the topics of discussion is what we end up with our three to five key themes. And so it ends up being a little bit of a trend report, but it’s based completely on our experience. It’s not external research, it’s not peer reviewed except for by the people with deep experience in the room. And people often think it’s a forward-looking report, but it’s just because it’s literally the snapshot and we get it out within a few weeks. We are pretty fast in terms of publishing, but it’s actually a look back. It’s like last six months of technology at Thoughtworks.
Shane Hastie: So what has been most interesting for you in facilitating that?
What Makes the Radar Interesting [03:56]
Rachel Laycock: That’s a really good question. So what’s really interesting is where there’s a hot debate, where something, one region or one team is finding success with something, another has a different opinion or there’s maybe two tools that roughly do the same thing and people have different perspectives on it. That’s when it gets really interesting. The ones where it’s kind of like, yes, everybody thinks that’s a good idea, that’s less interesting when we get into debates is really interesting. But they’re also really hard to blip.
So we have this concept of what we call too complex to blip, where we’re basically like, we’re never getting this in two paragraphs, this whole discussion, so we’re going to have to put out an article or a podcast or something like that. So those basically go into our thought leadership backlog of things that we might write about.
So then you might see them on MartinFowler.com, you might see them on the Thoughtworks podcast, you might see a longer form article on our website that kind of gets into the nitty-gritty of the pros and cons and the nuances that are sometimes involved in discussing especially techniques, but even sometimes tools and languages and frameworks can be hotly debated, which to me is the really interesting part because as a leader, especially in technology, there’s no one tool to rule them all. There’s never, this is the one true answer. It always is, it depends. And those conversations and discussions give me as a technology leader, a deeper understanding of where those, it depends, cases lie, which gives me better tools and insights for sharing with our clients and helping them think about it as well.
Shane Hastie: What’s been the most surprising thing that’s come out of that Radar for you.
Rachel Laycock: I think that the surprising thing that came out of the Radar is the amount of books and key thought leadership that set the tone in the industry. And I’ll use microservices as an example. I remember being in the discussion when that was being discussed and like any new thing, it was very hotly debated. Some people were like, that doesn’t seem like a good idea. Here’s all the problems associated with it because there are lots of challenges. We’re talking about a very complex architecture that requires a lot of skills in the teams to be able to build in that way and run software in that way. And so it was the kind of thing that was hotly debated. And then that, it started off as an article and finally got .com and then became the book. And I myself did plenty of talks on the conference circuit about the pros and cons of microservices and when you should do it and when you shouldn’t.
It was a big side effect that I don’t think anyone planned. It wasn’t like we went into that and be like, we’re going to get together every six months and we’ll produce this Radar and then just assume that books and other great things are going to come off the back of that. And so it was much more organic. First of all, the Radar was only supposed to be an internal publication, but when we started sharing the insights with clients, they were like, “Oh, that’s really helpful”. So then we started publishing externally and then the books followed from that.
Data mesh is another example. I remember that also being discussed in the Radar conversation of another technique, another approach. Again, very hotly debated internally. It wasn’t just the thousands of Thoughtworkers said, yes, this is a great idea. It was like, let’s see some use cases and see how it plays out. And then they eventually become kind of the canonical book. So it’s been exciting to be part of that journey, but it’s surprising. You wouldn’t have expected it.
Shane Hastie: So this is the engineering culture podcast. What is it about the culture of the organization and that group that enables this to happen?
Culture and Organizational Dynamics [07:23]
Rachel Laycock: Well, the Thoughtworks itself, I recently published an article, ’cause obviously everybody’s talking about AI and software right now and how productive we can be, and I pointed out that I don’t think we’ve ever hired anyone at Thoughtworks just because of how fast they can code, because it’s never just been around just coding. It’s always been around attitude, aptitude, integrity. Those were the three kind of, I guess values that we hired for. But there’s also a curiosity to Thoughtworkers, a constant. If you look at our sensible defaults and continuous improvement, continuous learning, curiosity, these types of things, there’s a lot of, I guess statements and things we say at Thoughtworks, like strong opinions, loosely held. And so when you then bring together leaders, especially if they’ve grown up at Thoughtworks, you come into the room and people are not afraid to say what they think and they’re also happy to be told they’re wrong.
And I’ve heard people that have come from other organizations be not used to that at all, where you have to be careful who you say in front of which people, and if you say the wrong thing, it could be a career limiting move. At Thoughtworks, it’s really not like that for better or worse. Very challenging as a leader when everybody’s always challenging you and asking you, “But did you think about this? And have you asked the right question?” But you bring that into the room when we’re discussing technology and you end up with really thoughtful perspectives that have taken into account different opinions and people change their mind throughout the process of that week. Maybe not always, and maybe not on everything, but I do find that quite unique to Thoughtworks. I will say that we’ve helped a lot of clients really like the Radar, and we’ve found that helping them build their own radar for their own organization has been super helpful.
And then as they progress down the path and they’ve done it a few times, they’ll be like, “Well, how do you handle this and how do you handle this?” And I’m like, oh, those are all the challenging exceptions of just dealing with people in a room with lots of different opinions. But this is a great thing. It means you’ve created the culture of people being able to express their opinion and hear the different voices in the room and come to a reasonable conclusion. So I just think that without Thoughtworks kind of culture, I don’t think we would have the Radar.
Shane Hastie: If I want to, and you’ve given us some pointers there, but if I wanted to instill something like that culture in an organization, where do I start?
You Can’t Copy Culture – You Can Encourage It [09:46]
Rachel Laycock: That’s really challenging because it’s hard to change culture of an existing organization. They say if it’s easy to change, it was never part of the culture in the first place. So I think the thing about Thoughtworks is its culture is kind of what I said around attitude, aptitude, integrity, curiosity, continuous improvement, and the fact that the culture was also built around agile, it was agile right from the start, has created that kind of culture. But that’s not to say you can’t introduce some of those into a different organization.
So whenever I’ve been helping clients go on the agile journey, the continuous delivery journey, the microservices journey, the digital transformation journey, all the different journeys that how we’ve constantly renaming things, but it’s often the same kinds of concepts that we bring to the fore. And I always say to clients, you won’t be Thoughtworks at the end of this. And that’s not the intent, right? The intent is that you take some of the best things about us that fit in with your culture and you transform your culture because if you don’t transform your culture into something that’s around continuous improvement and continuously evolving software instead of an old mindset of build, deploy, run, move on, there’s certain aspects to the culture that have to change in order to get into that continuous improvement, continuous evolution mindset. And you can bring those to the fore.
And some of the ceremonies help, although I’m not a fan of certifications that are built around just ceremonies, but they have to have intent. The reason why you do a stand-up every morning is so that you can quickly adjust if people are heading in the wrong direction and everyone has a shared context. The reason why you do retrospectives is so that you actually improve how the team is working and the ways of working for the team. If you just do those things, but you’re not clear on the intent, then you don’t get the value. And so I think when you start to introduce these types of ceremonies that are a part of XP, that are part of agile, that are part of what people have been doing with digital transformation with clear intent, then you can start to bring some of that culture along.
And then of course, another critical piece is the recruiting, as I said, attitude, aptitude, integrity has always been our thing. It was never about, we must hire from these universities and people have to have these things. It was always about who they are and what they brought to the table and what their approach was. And if they were essentially up for continuously learning and adapting, and most of the time we got that right, nobody gets recruiting right a hundred percent of the time, but most of the time we got that right and we were able to continue to grow the culture that we wanted.
Shane Hastie: Shifting tack a tiny bit, and you did touch on it when we were talking about the Radar, the efficiency focus that seems to be so prevalent today with generative AI. We’re going to bring in the co-pilots, we’re going to generate huge amounts of code and we’re going to be so much more efficient. I don’t see that really happening, do you?
AI Efficiency Hype and Reality [12:41]
Rachel Laycock: No, I don’t, to be honest, to be totally blunt. And even when we are more efficient, people will build that in by default and will no longer be more efficient. I’ll tell you what I mean by that. So let’s say you measure efficiency in your organization through velocity, or how long does it take you to do so many story points? Well, you can almost do a kind of from them to there at this point in time, but once people start adopting those tools, they’re going to estimate those story points and that velocity based on the tools that they’re using.
What I’ve noticed is this is not like the agile movement, which was from the development teams driven by the engineers of recognizing that this waterfall approach was not helping us in many of the cases in terms of building software. And so if we take XP practices of pair programming to test-driven development, continuous integration, these kinds of things, and then some of the things I talked about earlier, like stand-ups and retrospective, that’s going to help us move fast as well as have high quality resilient features out the door.
But it was driven by engineers and by software development teams, not just engineers, also project managers and other folks that were part of the development teams. This focus on efficiency comes from the top down. And most of the technology leaders that I speak to are like, “My board’s putting pressure on me to measure efficiency and then tell me how much faster I am”. And I’ve been hearing this for a year and a half now, where they’re coming to me and saying, “What’s your efficiency metric? How are you measuring it?” And it’s notoriously hard to measure, by the way, for more reasons than I can even name here. But it’s also the wrong focus, because the challenge of building high quality products at speed, at scale, that are resilient in production has never been how much faster can I write code? That’s never been the problem.
The problem is often the legacy systems that are hampering their ability to move forward, it could be some processes and ways of working that are hampering their ability to move forward. It could be alongside the legacy. It’s like they don’t have the right continuous integration and continuous delivery and deployment pipelines in place and they don’t the right testing in place. And these are problems that I’ve seen time and time again in organizations that are the real barriers to them moving effectively and achieving results effectively. And honestly, at the end of the day, these tools, whether it’s code generation or coding assistance, they amplify indiscriminately. So you can write code faster, but it doesn’t mean that it’s high quality code, not if you don’t have the right guardrails in place. And so you could actually create more problems, right?
It’s like, okay, now I can write twice as much code. And it’s like, cool. Now you’ve got twice as much technical debt than you had before. And what was your biggest problem before in terms of being able to move quickly? Oh, it was technical debt. It wasn’t actually writing features faster.
And so I’m hopeful that as an industry, we’ll kind of move away from board-driven development as I’ve started calling it and back into, okay, let’s get these tools into the hands of the engineers and into the people that are part of the product software development life cycle. And then let’s see what great things they can do with them to solve some of the really intractable problems in software around technical debt, around legacy modernization, around running existing systems, around making systems more resilient instead of the hyper focus on let’s just build more and build it faster.
But I have a strong opinion on that. I’m happy for it to be weakly held and somebody to prove me wrong, but it hasn’t happened yet and it’s been two years.
Shane Hastie: If we do get these tools in the hands of the right people for the right reasons, what are some of the potential outcomes that you can see happening there?
Practical AI Applications [16:25]
Rachel Laycock: Well, one of the things I saw really early on when we gave some of our developers access to these tools that we’re dealing with some really challenging problems in modernizing legacy systems is the ability to use these tools alongside other techniques to do code comprehension. So to understand code bases that you can only understand with an SME. And I’m looking at things like mainframes and COBOL, but that’s not the only ones. There’s plenty of other code bases written in all kinds of languages that very few people in an organization really understand or really have context of. They were written in a time, maybe there wasn’t great documentation, there wasn’t much testing. They require that SME. And we saw people immediately starting to see results of just being able to comprehend and interrogate what a code base was doing. And I did a video on this at YOW! last year, so you can find that and Google it, but I talk about what was the techniques that we used to do that. So that was one.
There’s another organization that we just started partnering with called Mechanical Orchard, which founded by the people that founded Pivotal Labs, again, big proponents of XP practices. And they’ve started to use generative AI not only to understand existing code bases, but to actually transform them from old style code bases into new style code bases. And I’m not talking about just moving it from COBOL to Java, and it still looks like COBOL and it’s affectionately called JOBOL, but I’m talking about really being able to build out the test harness and then transform the code and then check that it’s performing at the other end. And so there’s some really interesting stuff going on there as well. And then I think what’s also an important factor is when you get these tools in the hands of really experienced developers, they can test the edges of what these things can do really well and where the gaps still are.
AI Coding Mirroring the Microservices Intent [18:16]
And I’ll use the example of when we first were introducing the concept of microservices, one of the early concepts was, these very modular small pieces, if you build it in such a way that it’s small enough that you can easily comprehend it, then when you want to make changes to it, you don’t change it, you just rebuild it, which I don’t think anyone ever really did because there’s still effort involved in doing that. But let’s say you did have a really nicely architected modular system and you’ve built great test harnesses, and that’s all in a pipeline. Well, maybe with Generator you could rebuild small modular components quite easily. So that’s where I think it starts to get interesting is like, what can we do with the code base based on the current state of that code base? If it’s well architected and very modular, you could probably do different things with it versus it being a legacy code base. How can we take a legacy code base and turn it into something that’s well architected and modular?
But I think what will be really important will be how we specify the software and how we verify it. And so organizations that have gone to the effort of having really strong guardrails and really good verification in their systems with continuous deployment and continuous delivery, I think are going to be able to do more interesting things with these tools earlier than those that are not in that state. And so I’ve not exactly predicted exactly where it’s going, but I think those are the things that we’re exploring. And I think those are the things that start to get interesting when you put it in the hands of the people who are really solving problems day in and day out in software.
Shane Hastie: Will I one day be able to take my monolith and drop it into a funnel and out of the end comes full microservices?
The Monolith to Microservices Question [19:58]
Rachel Laycock: One day, maybe not in the near term. In the near term, these tools can help you do that, but they’re not going to do it for you. It’s not insert code base here out pops well-architected modular architecture. It’s going to take still a lot of humans in the loop along the way. And that’s probably a good thing. ‘Cause I think that the hype around the end of the software engineer I think is greatly overestimated right now. But I do think the role will change and the kinds of things that you do day in and day out could change based on the tools, but that’s always been true. Once we started using IntelliJ and IDEs, we typed a lot less, we’re probably going to be typing even less, but the understanding of the architecture of the system and getting that right for how you want the thing to run in production, that still requires real depth of experience. And I don’t think that’s going anywhere anytime soon.
All the POCs and all the hype I see we’re talking about like, “Oh, look, I can build this app in five minutes and it used to take me days”. And I’m like, yes, but it’s a single app. It’s like, that’s not what most of us are doing. When we’re building scaled enterprise software, we’re not just building one random app. That’s really not the problem we have to solve. It’s fun and it’s cool, great side project, and I’m all for vibe coding, building my own little apps at home, but in production guardrails are required.
Shane Hastie: So one of the things that sits in my mind is these tools are really good for experienced engineers. How do we build that experience?
Building Developer Experience [21:34]
Rachel Laycock: That’s a great question. It’s going to get a lot more challenging. Right now, the way we build that experience is we have what we call leverage across a product team. So you have experienced people, you have some mid-level experienced people, and then you’ll have some fresh out of university or one to three years in their role and mixing them together, you get that kind of mentoring situation where they learn from each other. And then obviously most of us learned from the first time we put something in production and it went wrong, is that it’s usually the hard lesson that really teaches you about the importance of good testing and feature toggles and all of this good stuff. And we do a lot of that. At least since I’ve been in the industry, we’ve been using IDEs more or less. And so yes, a lot of it is auto complete, but most of the debugging and everything you had to figure out yourself.
Now, if you introduce tools that are helping you do the debugging or helping you fix the tickets, to me, that’s where a lot of the learning happens: when things go wrong. If it always goes right, then you only ever learn the happy path. And that is the thing that’s puzzling to me: if we get to a stage where we’re able to build more software, and rebuild more software faster and more effectively, then fewer people will be running bigger systems, because there’ll always be more software to build. Nobody’s run out of ideas for products and things that they want to build. But it brings up the question of how you grow really deep expertise in folks who are working at higher levels of abstraction. When something goes wrong, if the AI can’t help them, how are they going to figure it out?
And that’s kind of a puzzle to me. There are various tests we’re running in terms of different team shapes and sizes that you can leverage with different tools, but I don’t have an answer yet for what the shape of a team will look like and how we’ll grow experience in engineers in the future when it changes dramatically. And I’m sure the industry faced this problem when we moved to the layer of abstraction where we no longer had to worry so much about the performance of the machine, and a lot of that was taken care of. People were like, “Well, who’s going to care about that? And what if something goes wrong?”
And in the end, we have the kinds of verification in place that tell us if performance is not going well, and then we’ll go in there and debug stuff. And we do seem to figure it out. But yes, it’s an open question for me. I’m sure we’ll get to a place in the industry where we figure out how to create new career paths for people, but there are a lot of unknowns right now, and that in itself generates a lot of risk and fear.
Shane Hastie: Risk and fear, and the turbulence that we’ve seen in the industry, and as a result of that, massive disengagement. How do we shift as an industry? What are the things we can do to get better?
Addressing Workforce Concerns and Hype-cycle Effects [24:34]
Rachel Laycock: Yes, it’s a good point. I think the challenge with all this hype and all this noise around “oh, we won’t need software engineers anymore, the agent’s going to do everything” is that it does disengage the workforce. In fact, what I predict is that we’re still going to need engineers who really understand systems; it’s just that they’ll probably have coverage over more systems, because a lot more of it can be automated. Which is great, but we still need them in those roles, and I don’t see them going anywhere anytime soon. And I am worried that, with all the hype, the technologists are getting disengaged. It’s hard to get people excited and say, “Hey, use this tool and see what we can do with it”, if they’re being told, “Use this tool and you won’t have a job in five years”.
So I guess I’m just hoping that, as an industry, we get over this hype cycle, and I’m starting to see signs of it in the news, but we’ll see: the models settle down a little bit, the tools settle down a little bit, and then the change becomes more incremental. And then we’ll start to know, okay, how do things change with these models and with these tools?
Certainly, inside Thoughtworks, what I’ve been saying is that it’s not me, the CTO, or the technology leadership that’s doing this to you; this is happening in the industry. And I recognize this is coming top down. It’s not you guys saying, “Hey, we found these cool tools”, which is kind of the Radar, going back to the earlier conversation. The Radar helps us identify what kinds of things we wanted to tell our clients to use because our teams were saying, “These tools are so much better. Can we get clients to use them?” But I’m trying to engage people with the belief that we’re still going to need deep technology professionals, and I want to help everybody at Thoughtworks learn to be what the new version of that is.
Now, I could be wrong. But so could the hundred, 200, 300 other people espousing different perspectives out in the world. No one knows exactly how things are going to turn out. But I do think it’s really important as technology leaders to try and figure out how to engage people and get them excited about this. Because if we don’t, it’s just going to keep coming from the outside in, from the board, from organizations that are incentivized to make a lot of noise about this. And I believe we’re going to need deep technologists, especially if they’re covering more systems in the future. So I don’t want people to get disinterested in the industry, or look to go into another field, or decide not even to join. And that’s what worries me the most right now: people coming out of university saying, “Oh, well, there’s no point. Everybody’s saying this won’t be a job in two years”. I just don’t buy that, and I think that’s really problematic.
Shane Hastie: How do I tell my granddaughter that there’s a good career in technology today?
Technology Careers Remain Viable Despite AI Advances [27:25]
Rachel Laycock: Well, my view is that technology is not going anywhere. The machine right now can’t do complex design. It doesn’t know intent. It doesn’t know why the software was built the way it was. It can tell you what it’s doing, but it doesn’t know why. And so that’s where humans fit in. Good design still comes from humans. And at the end of the day, these large language models are just built off past knowledge. There’s still a lot of creativity that goes into software. We have this constant thinking about software in terms of building bridges, we lean into these engineering metaphors, and I just wish they would die, actually, because software is just as much an art and a craft as it is a science, and it’s evolved so much because there’s so much creativity that humans can bring to it.
And so, as I said earlier, will the day-to-day tasks of a software developer change? For sure. But is technology going anywhere? Is the machine going to build it and run it all for us? Not anytime soon, I believe. And so I do think there are still plenty of roles in the technology industry for all of us. But predicting what’s going to happen in 10 years, that’s much more challenging.
Shane Hastie: Rachel, really interesting conversation. Thank you so much for taking the time to talk to us today. If people want to continue the conversation, where do they find you?
Rachel Laycock: Oh, that’s easy. I’m on LinkedIn. You can just search for me and ping me and I’m happy to talk.
Shane Hastie: Thank you so much.
Rachel Laycock: All right. Nice to meet you, Shane. Thank you.

MMS • Almir Vuk
Article originally posted on InfoQ. Visit InfoQ

The .NET team has released Dev Proxy version 0.28, introducing new capabilities to improve observability, plugin extensibility, and integration with AI models. A central feature of this release is the OpenAITelemetryPlugin, which, as reported, allows developers to track usage and estimated costs of OpenAI and Azure OpenAI language model requests within their applications.
The plugin intercepts requests and records details such as the model used, token counts (prompt, completion, and total), per-request cost estimates, and grouped summaries per model.
According to the announcement, this plugin supports deeper visibility into how applications interact with LLMs, which can be visualized using external tools like OpenLIT to understand usage patterns and optimize AI-related expenses.
The update also supports Microsoft’s Foundry Local, a high-performance local AI runtime stack introduced at the Build conference last month. Foundry Local enables developers to redirect cloud-based LLM calls to local environments, reducing cost and enabling offline development.
As stated, Dev Proxy can now be configured to use local models; the dev team writes:
Our initial tests show significant improvements using Phi-4 mini on Foundry Local compared to other models we’ve used in the past. We’re planning to integrate with Foundry Local by default, in the future versions of Dev Proxy.
To configure Dev Proxy with Foundry Local, developers can specify the local model and endpoint in the languageModel section of the proxy’s configuration file. This integration offers a cost-effective alternative for developers working with LLMs during local development.
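For illustration, here is a minimal sketch of what such a configuration might look like. The property names and endpoint below (enabled, model, url) are assumptions based on the announcement rather than a verified Dev Proxy schema, so check the official documentation for the exact shape:

```jsonc
// devproxyrc.jsonc — hypothetical sketch, not the verified schema.
{
  "languageModel": {
    "enabled": true,
    // Hypothetical Foundry Local model name and endpoint; substitute the
    // values reported by your local Foundry Local instance.
    "model": "phi-4-mini",
    "url": "http://localhost:5273/v1"
  }
}
```

Since the same release adds JSONC support across configuration files, inline comments like the ones above can document such settings directly in the configuration.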
For .NET Aspire users, a preview version of Dev Proxy extensions is now available. These extensions simplify integration with Aspire applications, allowing Dev Proxy to run either locally or via Docker with minimal setup. As reported, this enhancement improves portability and simplifies the configuration process for distributed development teams.
In addition, support for OpenAI payloads has been expanded. As stated, previously limited to text completions, Dev Proxy now includes support for a wider range of completion types, increasing compatibility with OpenAI APIs.
The release also brings enhancements to TypeSpec generation. In line with TypeSpec v1.0 updates, the plugin now supports improved PATCH operation generation, using MergePatchUpdate to clearly define merge patch behavior.
As noted in the release, Dev Proxy now supports JSONC (JSON with comments) across all configuration files. This addition enables developers to add inline documentation and annotations, which can aid in team collaboration and long-term maintenance.
Concurrency improvements have also been made in logging and mocking. These changes ensure that logs for parallel requests are grouped accurately, helping developers trace request behavior more effectively.
Two breaking changes are included in this release. First, the GraphConnectorNotificationPlugin has been removed, following the deprecation of Graph connector deployment via Microsoft Teams.
Furthermore, the --audience flag in the devproxy jwt create command has been renamed to --audiences, while the shorthand alias -a remains unchanged.
The CRUD API plugin has been updated with improved CORS handling and consistent JSON responses, enhancing its reliability in client-side applications.
Finally, the Dev Proxy Toolkit for Visual Studio Code has been updated to version 0.24.0. This release introduces new snippets and commands, including support for the aforementioned OpenAITelemetryPlugin, as well as improved Dev Proxy Beta compatibility and better process detection.
For interested readers, full release notes are available in the official repository, providing a complete overview of the features, changes, and guidance for this version.

MMS • Sergio De Simone
Article originally posted on InfoQ. Visit InfoQ

In a thorough article, Tripadvisor iOS principal engineer Ben Sarrazin described their journey toward adopting The Composable Architecture (TCA) for their existing iOS app, moving away from the Model-View-ViewModel-Coordinator (MVVM-C) architecture.
Sarrazin explains that the decision to move away from MVVM-C was driven by several factors, all tied to increasing app complexity and a growing team. One of the pain points was navigation:
Perhaps the most painful aspect of our MVVM-C implementation is the navigation structure — or more accurately, the lack of one. Our coordinators can launch any other coordinator, creating a web of navigation possibilities that is nearly impossible to document or reason about.
For example, this complexity became evident when an anonymous user visiting the site attempted an action requiring authentication. Even in such a common scenario, the MVVM-C navigation involved numerous coordinators, view models, and event mappers, making the codebase hard to understand and modify.
Another challenge stemmed from the coordinators’ reliance on UIViewControllers, which added complexity and required using Combine as a communication layer between SwiftUI and UIKit.
Debugging Combine-based event chains proves exceptionally difficult, especially when they traverse multiple coordinators and are composed of several layers of publishers and operators.
TCA, by contrast, promised several advantages, such as seamless integration with SwiftUI, robust testing capabilities, and improved composability. The Tripadvisor team also valued TCA’s evolution and maturity, along with its high-quality documentation and support.
To handle the migration, the Tripadvisor iOS team adopted a bottom-up approach. They began by identifying view models without children and replacing them with TCA stores, then gradually worked their way up to parent view models.
A similar “leaf-to-root” strategy was applied to navigation elements, but with a twist. In fact, since TCA requires centralized, state-driven navigation, coordinators were not replaced one-to-one. Instead, parent coordinators assumed the navigation responsibilities of their children. This ultimately resulted in a single source of truth for navigation: a global router implemented as a TCA reducer.
This navigation consolidation represents perhaps the most transformative aspect of our migration. Where we currently have dozens of coordinators with overlapping responsibilities and complex interactions, we’ll eventually have a clean, state-based navigation system that’s both more powerful and significantly easier to understand.
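To make the idea concrete, here is a rough, generic sketch of state-driven navigation in TCA, not Tripadvisor’s actual router: a hypothetical RootFeature reducer owns the presentation of a hypothetical LoginFeature, so deciding whether the login screen appears is a single state mutation made in one place.

```swift
import ComposableArchitecture

// Hypothetical child feature, used only for illustration.
@Reducer
struct LoginFeature {
    @ObservableState
    struct State: Equatable {}
    enum Action { case loginSucceeded }
    var body: some ReducerOf<Self> {
        Reduce { _, _ in .none }
    }
}

// A sketch of a single source of truth for navigation: the presence of
// `login` state *is* the navigation decision.
@Reducer
struct RootFeature {
    @ObservableState
    struct State: Equatable {
        var isAuthenticated = false
        @Presents var login: LoginFeature.State?
    }

    enum Action {
        case saveToTripTapped // a hypothetical action that requires auth
        case login(PresentationAction<LoginFeature.Action>)
    }

    var body: some ReducerOf<Self> {
        Reduce { state, action in
            switch action {
            case .saveToTripTapped:
                // Navigation is just a state mutation, made in one place.
                if !state.isAuthenticated {
                    state.login = LoginFeature.State()
                }
                return .none

            case .login(.presented(.loginSucceeded)):
                state.isAuthenticated = true
                state.login = nil
                return .none

            case .login:
                return .none
            }
        }
        .ifLet(\.$login, action: \.login) { LoginFeature() }
    }
}
```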
The migration required a complete mindset shift, explains Sarrazin, and came with some challenges. One key insight was that replicating the existing feature hierarchies in TCA wasn’t always the best approach. Instead, the team learned to take into account the implications of sending actions between parent and child components, which can lead to excessive bidirectional communication. A better pattern, they found, was to centralize shared behaviors in parent components where possible.
Another challenge arose when too many actions were dispatched in a short time span, for example, while scrolling through a list. To address this, the team found it effective to debounce high-frequency inputs and minimize the number of actions sent to the store, favoring simple state updates within reducers instead.
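As an illustration of that debouncing pattern, here is a minimal, hypothetical sketch (not Tripadvisor’s code) of a TCA reducer that reacts to every query change but only forwards the last one within a short window to the expensive work, by cancelling in-flight effects:

```swift
import ComposableArchitecture

// Hypothetical search feature whose query changes on every keystroke.
@Reducer
struct SearchFeature {
    @ObservableState
    struct State: Equatable {
        var query = ""
        var results: [String] = []
    }

    enum Action {
        case queryChanged(String)
        case debouncedQueryChanged(String)
        case resultsLoaded([String])
    }

    private enum CancelID { case debounce }

    @Dependency(\.continuousClock) var clock

    var body: some ReducerOf<Self> {
        Reduce { state, action in
            switch action {
            case let .queryChanged(query):
                // Update state immediately, but delay the expensive work.
                state.query = query
                return .run { send in
                    try await clock.sleep(for: .milliseconds(300))
                    await send(.debouncedQueryChanged(query))
                }
                // Cancelling in-flight effects means only the last change
                // within the window actually triggers a search.
                .cancellable(id: CancelID.debounce, cancelInFlight: true)

            case let .debouncedQueryChanged(query):
                return .run { send in
                    // Hypothetical search call; replace with a real client.
                    await send(.resultsLoaded(["result for \(query)"]))
                }

            case let .resultsLoaded(results):
                state.results = results
                return .none
            }
        }
    }
}
```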
An area where TCA also brought many benefits was testing, helping reduce test brittleness.
We found that tests written with TCA’s TestStore provided much stronger guarantees about application behavior. A test that passes gives us high confidence that the feature works as expected, which wasn’t always true with our previous testing approach, especially with the heavy dependency on Combine and schedulers.
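For context, here is a minimal, generic sketch of the kind of exhaustive test TestStore enables, using a hypothetical Counter feature rather than anything from the Tripadvisor code base:

```swift
import ComposableArchitecture
import XCTest

// Hypothetical feature under test.
@Reducer
struct Counter {
    @ObservableState
    struct State: Equatable { var count = 0 }
    enum Action { case incrementTapped }
    var body: some ReducerOf<Self> {
        Reduce { state, action in
            switch action {
            case .incrementTapped:
                state.count += 1
                return .none
            }
        }
    }
}

final class CounterTests: XCTestCase {
    @MainActor
    func testIncrement() async {
        let store = TestStore(initialState: Counter.State()) { Counter() }
        // The trailing closure must describe *every* state change; any
        // unasserted mutation or unreceived action fails the test.
        await store.send(.incrementTapped) {
            $0.count = 1
        }
    }
}
```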
Additionally, the team found that TCA tests often served as a form of design feedback: when a test became hard to read or write, it was usually a sign that the underlying code could be improved.
Overall, the migration proved very effective, according to Sarrazin. His article offers many valuable insights that go beyond what can be covered here. Do not miss it if you’re interested in the full details.