CloudMile wins MongoDB Emerging Markets Partner of the Year award – IT Brief Asia

MMS Founder
MMS RSS


CloudMile has been named the 2024 MongoDB Emerging Markets Partner of the Year, a recognition of its contributions to data modernisation efforts in Southeast Asia. The accolade highlights the successful collaboration between CloudMile and MongoDB in fostering business growth within the region.

Jeremy Heng, Southeast Asia Lead at CloudMile, expressed gratitude for the award: “We are deeply honoured to receive the MongoDB Emerging Markets Partner of the Year award. This recognition is a testament to our team’s relentless dedication, deep MongoDB expertise, and unwavering focus on delivering transformative data and AI solutions to our clients.”

“As we continue our journey of helping organisations leverage data and AI to drive transformative change, we remain committed to our partnership with MongoDB and to delivering cutting-edge solutions that unlock new possibilities for customers,” Jeremy Heng said.

The partnership between CloudMile and MongoDB, initiated in 2023, has focused on leveraging MongoDB Atlas to enhance the data infrastructure of various enterprise clients. MongoDB Atlas stands out for its ability to unify operational, unstructured, and AI-related data, streamlining the development of AI-enriched applications.

Stewart Garrett, Regional Vice President for ASEAN and Japan at MongoDB, spoke about the productive collaboration between the two companies: “Our partnership with CloudMile hit the ground running last year, and it was clear from the start that they shared our passion for helping customers in the region find the right technology to innovate quickly.”

“Working closely with CloudMile, we’re able to be part of the solution that helps businesses in growing markets incorporate the latest generative AI solutions into their tech stack to enable the development of next-generation applications. It’s our sincere pleasure to recognise their efforts with this year’s MongoDB Emerging Markets Partner of the Year award,” Stewart Garrett stated.

A notable example of their collaborative success is CloudMile’s engagement with a gaming company in the Asia-Pacific region, which boasts over 1 million active members. This company struggled with the limitations of traditional relational databases, which impacted the user experience. By employing MongoDB Atlas, CloudMile was able to effectively manage and analyse the gaming company’s rapidly growing data, significantly improving the user experience and customer satisfaction.

CloudMile’s achievements have received multiple acknowledgements this year, including being named a Google Cloud Sales & Services Partner of the Year for Singapore and the Google Cloud Social Impact Partner of the Year for the APAC region. These awards underscore CloudMile’s excellence in delivering innovative cloud-based solutions.

Article originally posted on mongodb google news. Visit mongodb google news



American International Group Inc. Grows Holdings in MongoDB, Inc. (NASDAQ:MDB)

MMS Founder
MMS RSS


American International Group Inc. grew its stake in MongoDB, Inc. (NASDAQ:MDB) by 433.3% during the fourth quarter, according to its most recent filing with the Securities and Exchange Commission. The institutional investor owned 6,090 shares of the company’s stock after purchasing an additional 4,948 shares during the period. American International Group Inc.’s holdings in MongoDB were worth $2,490,000 as of the filing.

A number of other hedge funds also recently made changes to their positions in the stock. Jennison Associates LLC boosted its position in MongoDB by 3.3% during the 4th quarter. Jennison Associates LLC now owns 3,856,857 shares of the company’s stock worth $1,576,876,000 after buying an additional 122,893 shares during the period. Norges Bank bought a new position in MongoDB during the 4th quarter worth approximately $326,237,000. First Trust Advisors LP raised its holdings in MongoDB by 59.3% during the 4th quarter. First Trust Advisors LP now owns 549,052 shares of the company’s stock worth $224,480,000 after purchasing an additional 204,284 shares during the last quarter. Northern Trust Corp raised its holdings in MongoDB by 5.5% during the 3rd quarter. Northern Trust Corp now owns 448,035 shares of the company’s stock worth $154,957,000 after purchasing an additional 23,270 shares during the last quarter. Finally, Axiom Investors LLC DE bought a new position in MongoDB during the 4th quarter worth approximately $153,990,000. 89.29% of the stock is currently owned by institutional investors and hedge funds.

MongoDB Stock Performance

Shares of NASDAQ:MDB traded down $7.31 during midday trading on Tuesday, reaching $226.61. The company had a trading volume of 1,660,374 shares, compared to its average volume of 1,521,207. The company’s 50 day moving average price is $310.02 and its 200 day moving average price is $369.21. The company has a current ratio of 4.93, a quick ratio of 4.93 and a debt-to-equity ratio of 0.90. The stock has a market cap of $16.62 billion, a P/E ratio of -80.64 and a beta of 1.13. MongoDB, Inc. has a 52-week low of $214.74 and a 52-week high of $509.62.

MongoDB (NASDAQ:MDB) last issued its quarterly earnings results on Thursday, May 30th. The company reported ($0.80) earnings per share for the quarter, hitting analysts’ consensus estimates of ($0.80). The company had revenue of $450.56 million for the quarter, compared to the consensus estimate of $438.44 million. MongoDB had a negative net margin of 11.50% and a negative return on equity of 14.88%. As a group, sell-side analysts anticipate that MongoDB, Inc. will post ($2.67) EPS for the current fiscal year.

Analysts Set New Price Targets

Several research firms have issued reports on MDB. KeyCorp lowered their price target on shares of MongoDB from $490.00 to $440.00 and set an “overweight” rating for the company in a report on Thursday, April 18th. Robert W. Baird lowered their price target on shares of MongoDB from $450.00 to $305.00 and set an “outperform” rating for the company in a report on Friday, May 31st. Loop Capital lowered their price target on shares of MongoDB from $415.00 to $315.00 and set a “buy” rating for the company in a report on Friday, May 31st. Stifel Nicolaus dropped their target price on shares of MongoDB from $435.00 to $300.00 and set a “buy” rating on the stock in a research report on Friday, May 31st. Finally, Wells Fargo & Company cut their target price on shares of MongoDB from $450.00 to $300.00 and set an “overweight” rating on the stock in a research note on Friday, May 31st. One research analyst has rated the stock with a sell rating, five have issued a hold rating, nineteen have assigned a buy rating and one has given a strong buy rating to the company’s stock. According to MarketBeat, MongoDB has a consensus rating of “Moderate Buy” and a consensus price target of $361.30.


Insider Buying and Selling at MongoDB

In other news, CEO Dev Ittycheria sold 17,160 shares of the stock in a transaction dated Tuesday, April 2nd. The shares were sold at an average price of $348.11, for a total value of $5,973,567.60. Following the sale, the chief executive officer directly owns 226,073 shares of the company’s stock, valued at $78,698,272.03. In related news, Director Hope F. Cochran sold 1,174 shares of the company’s stock in a transaction that occurred on Monday, June 17th. The shares were sold at an average price of $224.38, for a total value of $263,422.12. Following the transaction, the director directly owns 13,011 shares in the company, valued at $2,919,408.18. Both sales were disclosed in filings with the SEC. Insiders have sold a total of 49,976 shares of company stock valued at $17,245,973 over the last three months. 3.60% of the stock is currently owned by insiders.

MongoDB Profile


MongoDB, Inc, together with its subsidiaries, provides general purpose database platform worldwide. The company provides MongoDB Atlas, a hosted multi-cloud database-as-a-service solution; MongoDB Enterprise Advanced, a commercial database server for enterprise customers to run in the cloud, on-premises, or in a hybrid environment; and Community Server, a free-to-download version of its database, which includes the functionality that developers need to get started with MongoDB.


Article originally posted on mongodb google news. Visit mongodb google news



JEP 471: Preparing for the Removal of Unsafe Memory-Access Methods

MMS Founder
MMS A N M Bazlur Rahman

Article originally posted on InfoQ. Visit InfoQ

JEP 471, Deprecate the Memory-Access Methods in sun.misc.Unsafe for Removal, has been delivered for JDK 23. This JEP proposes to deprecate the memory access methods in the Unsafe class for removal in a future release. These unsupported methods have been superseded by standard APIs, namely, JEP 193, Variable Handles, delivered in JDK 9; and JEP 454, Foreign Function & Memory API, delivered in JDK 22.

The primary goal of this deprecation is to prepare the ecosystem for the eventual removal of sun.misc.Unsafe's memory-access methods. By highlighting their usage through compile-time and runtime warnings, developers can identify and migrate to supported replacements. This transition aims to ensure that applications can smoothly adapt to modern JDK releases, enhancing security and performance.

Two standard APIs now provide safe and efficient alternatives to sun.misc.Unsafe. The VarHandle API, delivered in JDK 9 through JEP 193, offers methods to safely manipulate on-heap memory, ensuring operations are performed efficiently and without undefined behaviour. The Foreign Function & Memory API, delivered in JDK 22 through JEP 454, provides safe off-heap memory access methods, often used in conjunction with VarHandle to manage memory inside and outside the JVM heap. These APIs promise no undefined behaviour, long-term stability, and better integration with Java tooling and documentation.

The deprecated sun.misc.Unsafe methods fall into three categories: on-heap, off-heap, and bimodal (methods that can access both on-heap and off-heap memory). The on-heap methods are:

long objectFieldOffset(Field f)
long staticFieldOffset(Field f)
Object staticFieldBase(Field f)
int arrayBaseOffset(Class arrayClass)
int arrayIndexScale(Class arrayClass)

These methods can be replaced by VarHandle and MemorySegment::ofArray with its overloaded methods. For example, consider the following code that uses sun.misc.Unsafe:

class Foo {

    private static final Unsafe UNSAFE = ...;    // A sun.misc.Unsafe object

    private static final long X_OFFSET;

    static {
        try {
            X_OFFSET = UNSAFE.objectFieldOffset(Foo.class.getDeclaredField("x"));
        } catch (Exception ex) { throw new AssertionError(ex); }
    }

    private int x;

    public boolean tryToDoubleAtomically() {
        int oldValue = x;
        return UNSAFE.compareAndSwapInt(this, X_OFFSET, oldValue, oldValue * 2);
    }
}

The above code can be implemented using VarHandle as follows:

import java.lang.invoke.MethodHandles;
import java.lang.invoke.VarHandle;

class Foo {

    private static final VarHandle X_VH;

    static {
        try {
            X_VH = MethodHandles.lookup().findVarHandle(Foo.class, "x", int.class);
        } catch (Exception ex) { throw new AssertionError(ex); }
    }

    private int x;

    public boolean tryAtomicallyDoubleX() {
        int oldValue = x;
        return X_VH.compareAndSet(this, oldValue, oldValue * 2);
    }
}

The off-heap methods are primarily as follows:

long allocateMemory(long bytes)
long reallocateMemory(long address, long bytes)
void freeMemory(long address)
void invokeCleaner(java.nio.ByteBuffer directBuffer)
void setMemory(long address, long bytes, byte value)
void copyMemory(long srcAddress, long destAddress, long bytes)
[type] get[Type](long address)
void put[Type](long address, [type] x)

These methods can be replaced by MemorySegment operations. Consider the following example:

class OffHeapIntBuffer {

    private static final Unsafe UNSAFE = ...;

    private static final int ARRAY_BASE = UNSAFE.arrayBaseOffset(int[].class);
    private static final int ARRAY_SCALE = UNSAFE.arrayIndexScale(int[].class);

    private final long size;
    private long bufferPtr;

    public OffHeapIntBuffer(long size) {
        this.size = size;
        this.bufferPtr = UNSAFE.allocateMemory(size * ARRAY_SCALE);
    }

    public void deallocate() {
        if (bufferPtr == 0) return;
        UNSAFE.freeMemory(bufferPtr);
        bufferPtr = 0;
    }

    private boolean checkBounds(long index) {
        if (index < 0 || index >= size)
            throw new IndexOutOfBoundsException(index);
        return true;
    }

    public void setVolatile(long index, int value) {
        checkBounds(index);
        UNSAFE.putIntVolatile(null, bufferPtr + ARRAY_SCALE * index, value);
    }

    public void initialize(long start, long n) {
        checkBounds(start);
        checkBounds(start + n-1);
        UNSAFE.setMemory(bufferPtr + start * ARRAY_SCALE, n * ARRAY_SCALE, 0);
    }

    public int[] copyToNewArray(long start, int n) {
        checkBounds(start);
        checkBounds(start + n-1);
        int[] a = new int[n];
        UNSAFE.copyMemory(null, bufferPtr + start * ARRAY_SCALE, a, ARRAY_BASE, n * ARRAY_SCALE);
        return a;
    }

}

The above can be replaced by using the standard Arena and MemorySegment APIs:

import java.lang.foreign.Arena;
import java.lang.foreign.MemorySegment;
import java.lang.foreign.ValueLayout;
import java.lang.invoke.VarHandle;

class OffHeapIntBuffer {

    private static final VarHandle ELEM_VH = ValueLayout.JAVA_INT.arrayElementVarHandle();

    private final Arena arena;
    private final MemorySegment buffer;

    public OffHeapIntBuffer(long size) {
        this.arena  = Arena.ofShared();
        this.buffer = arena.allocate(ValueLayout.JAVA_INT, size);
    }

    public void deallocate() {
        arena.close();
    }

    public void setVolatile(long index, int value) {
        ELEM_VH.setVolatile(buffer, 0L, index, value);
    }

    public void initialize(long start, long n) {
        buffer.asSlice(ValueLayout.JAVA_INT.byteSize() * start,
                       ValueLayout.JAVA_INT.byteSize() * n)
              .fill((byte) 0);
    }

    public int[] copyToNewArray(long start, int n) {
        return buffer.asSlice(ValueLayout.JAVA_INT.byteSize() * start,
                              ValueLayout.JAVA_INT.byteSize() * n)
                     .toArray(ValueLayout.JAVA_INT);
    }

}

The migration will occur in several phases, each aligned with a separate JDK release. In Phase 1, starting with JDK 23, all memory-access methods will be deprecated, and compile-time warnings will be issued. Phase 2, planned for JDK 25 or earlier, will introduce runtime warnings whenever the deprecated methods are used. Phase 3, scheduled for JDK 26 or later, will escalate the response by throwing exceptions by default when these methods are invoked. Finally, Phases 4 and 5 will remove the deprecated methods, potentially occurring in the same release.

Developers can use the new command-line option --sun-misc-unsafe-memory-access={allow|warn|debug|deny} to manage the deprecation warnings and assess the impact on their applications.
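For example, to surface the runtime warnings across an entire application without otherwise changing its behaviour, a launch could look like the following (an illustrative invocation; the JAR name is hypothetical):

java --sun-misc-unsafe-memory-access=warn -jar my-app.jar

Per the JEP, the debug value additionally prints a stack trace at each use, while deny throws an UnsupportedOperationException, matching the behaviour planned as the default in later phases.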

The deprecation of sun.misc.Unsafe memory-access methods is a significant step towards enhancing the integrity and security of the Java Platform. By adopting the VarHandle and Foreign Function & Memory APIs, developers can ensure their applications remain robust and compatible with future JDK releases. The phased approach provides ample time for migration, minimizing disruption while promoting best practices in Java development.



Java News Roundup: Payara Platform, Jakarta EE 11 Specs, Open Liberty, Micronaut, Quarkus

MMS Founder
MMS Michael Redlich

Article originally posted on InfoQ. Visit InfoQ

This week’s Java roundup for June 17th, 2024 features news highlighting: the Payara Platform release for June 2024; all 16 Jakarta EE 11 specifications having passed their respective reviews; Open Liberty 24.0.0.6; Micronaut 4.5.0; and two Quarkus point releases.

OpenJDK

Christian Stein, Principal Member of Technical Staff at Oracle, has announced that version 7.4.0 of the Regression Test Harness for the JDK, jtreg, released in May 2024, is now the default version for the JDK 24 early-access builds.

JDK 23

Build 28 of the JDK 23 early-access builds was made available this past week featuring updates from Build 27 that include fixes for various issues. Further details on this release may be found in the release notes, and details on the new JDK 23 features may be found in this InfoQ news story.

JDK 24

Build 3 of the JDK 24 early-access builds was also made available this past week featuring updates from Build 2 that include fixes for various issues. Release notes are not yet available.

For JDK 23 and JDK 24, developers are encouraged to report bugs via the Java Bug Database.

Jakarta EE

The final two specifications targeted for Jakarta EE 11, Jakarta Authentication 3.1 and Jakarta Security 4.0, have passed their respective release reviews this past week. This means that all 16 specifications updated for Jakarta EE 11 are now complete!

In his weekly Hashtag Jakarta EE blog, Ivar Grimstad, Jakarta EE Developer Advocate at the Eclipse Foundation, explained that efforts are focused on finalizing the TCK and completing the required changes in the Jakarta EE Platform, Web Profile and Core Profile before the final GA release of Jakarta EE 11.

Spring Framework

It was a busy week over at Spring as the various teams have delivered numerous milestone and point releases on Spring Boot, Spring Framework, Spring Security, Spring Authorization Server, Spring for GraphQL, Spring Session, Spring Integration, Spring Modulith, Spring AMQP, Spring for Apache Kafka, Spring for Apache Pulsar and Spring Tools. More details may be found in this InfoQ news story.

Payara

Payara has released their June 2024 edition of the Payara Platform that includes Community Edition 6.2024.6, Enterprise Edition 6.15.0 and Enterprise Edition 5.64.0. All three editions feature: optimized Multi-Release JAR class loading for faster application startup and operation; and an improved thread expiration validation to resolve an inconsistent session timeout when using Session Replication with the --lite command line option.

There was also an upgrade to Payara Security Connectors 3.1.1 and 2.7.1 for the version 6 release train, Community and Enterprise, and the version 5 release train, respectively.

Further details on these releases may be found in the release notes for Community Edition 6.2024.6, Enterprise Edition 6.15.0 and Enterprise Edition 5.64.0.

Helidon

Helidon 4.0.10, the tenth maintenance release, provides notable changes such as: a new inner class, MethodStateCache, defined in the MethodInvoker class that implements a new method caching strategy in fault tolerance; a resolution to handle an invalid end-of-line when parsing HTTP headers and add the appropriate tests; and improvements in validating JWT tokens. More details on this release may be found in the changelog.

Quarkus

Quarkus 3.11.2, the second maintenance release, ships with resolutions to notable issues such as: a NullPointerException due to the setListeners() method, defined in the ShutdownRecorder class, not being called when QUARKUS_INIT_AND_EXIT is used; a misspelled URL for a JQuery WebJar resource throwing a StringIndexOutOfBoundsException instead of redirecting to an HTTP 404 status code; and a failure in using the Gradle quarkusDev parameter when usage analytics are enabled. Further details on this release may be found in the changelog.

Two days after the release of Quarkus 3.11.2, Quarkus 3.11.3, the third maintenance release, provides dependency upgrades and notable changes such as: compatibility with Maven Daemon (mvnd) 1.0; support for the ISO 8601 date/time format in the HTTP access logs; and a resolution to various issues with the lastModified property using the Quarkus REST extension. More details on this release may be found in the changelog.

Open Liberty

IBM has released version 24.0.0.6 of Open Liberty featuring: faster startup of Spring Boot applications using Spring Boot 3.0 InstantOn with CRaC; and InstantOn support for the Jakarta Messaging specification with IBM MQ and the JCache Session Persistence feature.

This release also addresses CVE-2024-22354, a vulnerability affecting IBM WebSphere Application Server 8.5 and 9.0, and IBM WebSphere Application Server Liberty 17.0.0.3 through 24.0.0.5, which are vulnerable to an XML External Entity Injection (XXE) attack when processing XML data. A remote attacker could exploit this vulnerability to expose sensitive information, consume memory resources, or conduct a server-side request forgery attack.

Micronaut

The Micronaut Foundation has released version 4.5.0 of the Micronaut Framework featuring Micronaut Core 4.5.3, bug fixes, improvements in documentation and updates to modules: Micronaut Data, Micronaut Servlet and Micronaut Micrometer.

This release also introduces new modules: Micronaut JSON Schema, for generating JSON schema definitions from classes at build time; Micronaut Sourcegen, for writing source generators and generating Builder and Wither classes; and Micronaut Guice, that allows the import of existing Guice modules.

Further details on this release may be found in the release notes.

Apache Software Foundation

The twenty-first milestone release of Apache Tomcat 11.0.0, along with point releases 10.1.25 and 9.0.90, all feature bug fixes and notable changes such as: ensuring that static resources deployed via a JAR file remain accessible when the context is configured to use a bloom filter; the default value of the discardFacades attribute, defined in the Connector class, now being true for improved safety; and an update to Commons Daemon 1.4.0. More details on these releases may be found in the release notes for version 11.0.0-M21, version 10.1.25 and version 9.0.90.

The release of Apache Camel 3.21.5 delivers bug fixes and improvements such as: removal of the now-deprecated fireEvent() method from the Jakarta CDI BeanManager interface; and an improved JMSCorrelationID message header, defined in the Jakarta Messaging Message interface, to handle message brokers that have bugs. This is the last planned patch release for the Camel 3.21 release train. Further details on this release may be found in the release notes.

The release of Apache Maven 3.9.8 ships with bug fixes, dependency upgrades and improvements such as: displaying the reason(s) why a model builder discards a model; an improvement to the SimplexTransferListener class to handle absent source/target files; and the list of plugins in the validation report now being sorted in alphabetical order. More details on this release may be found in the release notes.

JobRunr

Version 7.2.1 of JobRunr, a library for background processing in Java that is distributed and backed by persistent storage, has been released. It primarily fixes a ConcurrentModificationException that may be thrown due to concurrent updates to an instance of a Job class, and completes the transition from Kotlin 1.7 to Kotlin 2.0 by correctly naming the necessary artifact. This version also provides an enhancement that validates an implementation of the JobRequest interface when using the JobBuilder or the RecurringJobBuilder classes. Further details on this release may be found in the release notes.

JHipster

The release of JHipster Lite 1.11.0 ships with bug fixes, dependency upgrades and new features/enhancements such as: a new ElementReplacer interface dedicated to inserting text at the end of a file; and improved JHipster Lite logging. More details on this release may be found in the release notes.

Infinispan

Infinispan 15.0.5.Final, the fifth maintenance release, delivers notable changes such as: an optimized lookupResource() method, defined in the ResourceManagerImpl class, for improved processing of resources; a file cleanup in the RocksDB cache store before executing tests; and the return of an HTTP 400 (Bad Request) response code if a user requests initialization of an internal cache.

OpenXava

The release of OpenXava 7.3.3 ships with bug fixes, dependency upgrades and Maven improvements with new archetypes, openxava-project-management-archetype and openxava-crm-archetype, available in both English and Spanish. Further details on this release may be found in the release notes.

Keycloak

Keycloak 25.0.1, the first maintenance release, provides bug fixes and enhancements such as: the use of a proper Apache FreeMarker template for the refurbished Account and Admin Consoles; and enhanced masking in the CLI for values passed using the --config-keystore parameter.

Gradle

The first release candidate of Gradle 8.9 delivers: improved error and warning reporting for variant issues during dependency resolution; structural details of Java compilation errors exposed for IDE integrators, allowing for easier analysis and resolution of issues; and the ability to display more detailed information about the JVMs used by Gradle. More details on this release may be found in the release notes.



Spring Ecosystem Releases Focus on Spring Boot, Spring Security and Spring Modulith

MMS Founder
MMS Michael Redlich

Article originally posted on InfoQ. Visit InfoQ

There was a flurry of activity in the Spring ecosystem during the week of June 17th, 2024, highlighting point releases of: Spring Boot 3.3.1 and 3.2.7; Spring Security 6.3.1, 6.2.5 and 5.8.13; Spring Session 3.3.1 and 3.2.4; and Spring Modulith 1.2.1, 1.1.6 and 1.0.9.

Spring Boot

The release of Spring Boot versions 3.3.1 and 3.2.7 delivers improvements in documentation, dependency upgrades and resolutions to notable issues such as: an IllegalArgumentException when trying to use an instance of the Tomcat Http11Nio2Protocol class with Spring Boot-configured SSL; and an instance of the DataSourceProperties class failing to bind if the java.sql module isn’t included. Further details on these releases may be found in the release notes for version 3.3.1 and version 3.2.7.

Spring Framework

Spring Framework 6.1.10, the tenth maintenance release, provides bug fixes (including fixes for regressions introduced in version 6.1.9), improvements in documentation and new features: an instance of the PersistenceExceptionTranslationInterceptor class now defensively retrieves PersistenceExceptionTranslator interface beans to cover scenarios where the translator has not been initialized before shutdown; and support for all “connection reset” exception phrases from the DisconnectedClientHelper class. This version is included in the release of Spring Boot 3.2.7 and 3.3.1. More details on this release may be found in the release notes.

Spring Security

Versions 6.3.1, 6.2.5 and 5.8.13 of Spring Security have been released, shipping with bug fixes, dependency upgrades, build updates and new features such as: enhanced logging from within the check() method, defined in the RequestMatcherDelegatingAuthorizationManager class, which previously did not provide useful information; and an update to the ldap.adoc file to include the required dependencies, avoiding issues that developers have experienced while setting up LDAP. Further details on these releases may be found in the release notes for version 6.3.1, version 6.2.5 and version 5.8.13.

Spring Authorization Server

Versions 1.3.1 and 1.2.5 of Spring Authorization Server have been released featuring dependency upgrades and resolutions to issues: a ClassNotFoundException due to AOT hints preventing compilation when using the JdbcOAuth2AuthorizationService or JdbcRegisteredClientRepository classes; and authentication for an X509 client certificate enforcing the value assigned to the client_id field in the YAML configuration file without first checking the client authentication method. More details on these releases may be found in the release notes for version 1.3.1 and version 1.2.5.

Spring for GraphQL

Versions 1.3.1 and 1.2.7 of Spring for GraphQL have been released providing bug fixes, improvements in documentation, dependency upgrades and new features: support for returning instances of the Reactor Flux class from methods annotated with @EntityMapping to complement existing support for List, Mono and CompletableFuture; and allow the use of GraphQL Java 21.x in the Spring for GraphQL 1.2 release train. These versions are included in the release of Spring Boot 3.2.7 and 3.3.1, respectively. Further details on these releases may be found in the release notes for version 1.3.1 and version 1.2.7.

Spring Session

Versions 3.3.1 and 3.2.4 of Spring Session have been released with dependency upgrades and a new feature that resolves an issue in which a default implementation of the UserDetails interface, User, is returned instead of a user-defined custom implementation. More details on these releases may be found in the release notes for version 3.3.1 and version 3.2.4.

Spring Integration

Versions 6.3.1 and 6.2.6 of Spring Integration have been released featuring bug fixes, improvements in documentation, dependency upgrades and a new feature that provides the ZeroMqMessageHandler class with an optional topic for distributing messages into subscriptions that must be wrapped with an additional empty frame. This would complement the existing default topic. Further details on these releases may be found in the release notes for version 6.3.1 and version 6.2.6.

Spring Modulith

Versions 1.2.1 and 1.1.6 of Spring Modulith have been released featuring: an improved configuration of the ApplicationModuleDetectionStrategy interface via the spring.modulith.detection-strategy property that will accept values direct-sub-packages (default) or explicitly-annotated; a resolution to named interface detection accidentally picking up nested declarations in a nested interfaces scenario; and dependency upgrades to Spring Boot 3.3.1 and 3.2.7, respectively. More details on these releases may be found in the release notes for version 1.2.1 and version 1.1.6.

Spring AMQP

Version 3.1.6 of Spring AMQP has been released featuring dependency upgrades and resolutions to issues: the release() method, defined in the ActiveObjectCounter class, is unreachable due to the SimpleMessageListenerContainer class not having released the consumer variable; and elimination of an interrupted thread after performing target logic by moving the cancelTimeoutTaskIfAny() method, defined in the RabbitFuture class, into a finally block. Further details on this release may be found in the release notes.

Spring for Apache Kafka

Versions 3.2.1 and 3.1.6 of Spring for Apache Kafka have been released providing bug fixes, dependency upgrades and a new feature that adds tracing headers, now mapped to a string, in the AbstractKafkaHeaderMapper class after the migration from Sleuth to Micrometer. These versions are included in the release of Spring Boot 3.2.7 and 3.3.1, respectively. More details on these releases may be found in the release notes for version 3.2.1 and version 3.1.6.

Spring for Apache Pulsar

Versions 1.1.1 and 1.0.7 of Spring for Apache Pulsar have been released featuring numerous dependency upgrades that include: Micrometer Metrics 1.13.1 and 1.12.7, respectively; Reactive Client for Apache Pulsar 0.5.6; and Spring Framework 6.1.9. These versions are included in the release of Spring Boot 3.2.7 and 3.3.1, respectively. Further details on these releases may be found in the release notes for version 1.1.1 and version 1.0.7.

Spring Tools

Less than a week after the release of version 4.23.0, version 4.23.1 of Spring Tools has been released to deliver important fixes such as: the addition of missing preferences/settings for enabling/disabling JPQL, HQL and SQL syntax validation, as well as severities for syntax problems inside Spring Data queries; and a StackOverflowException from within the AnnotationHierarchies class upon opening a Spring Boot project in VSCode. More details on this release may be found in the release notes.



Kubernetes 1.30 Released with Contextual Logging, Improved Performance, and Security

MMS Founder
MMS Mostafa Radwan

Article originally posted on InfoQ. Visit InfoQ

The Cloud Native Computing Foundation (CNCF) released Kubernetes 1.30, named Uwubernetes, in April. The release introduced features such as recursive read-only mounts, a job completion policy, and fast recursive SELinux label changes.

One of the changes in Kubernetes 1.30 is the overhaul of memory swap support for Linux nodes. This improvement is designed to enhance system stability by providing more control over memory usage. Alongside this, the introduction of a sleep action for the PreStop lifecycle hook offers a simplified native option for managing pod termination activities and ensuring better workload management.

Alpha features in version 1.30 include the integration of the Common Expression Language (CEL) for admission control, which paves the way for more sophisticated policy controls and validation mechanisms in Kubernetes clusters. Furthermore, enhancements to service account tokens through Kubernetes Enhancement Proposals (KEP) aim to provide more secure and manageable service accounts, an essential component for maintaining secure Kubernetes environments.

Kubernetes 1.30 also brings beta support for user namespaces, a Linux feature that isolates container UIDs and GIDs from those on the host, significantly bolstering security measures.

Kat Cosgrove, from the release team, commented on Contextual Logging becoming beta in version 1.30:

This enhancement simplifies the correlation and analysis of log data across distributed systems, significantly improving the efficiency of troubleshooting efforts. By offering a clearer insight into the workings of your Kubernetes environments, Contextual Logging ensures that operational challenges are more manageable, marking a notable step forward in Kubernetes observability.

Further scheduling improvements have been made, highlighted by the introduction of MatchLabelKeys for PodAffinity and PodAntiAffinity, which allows for better pod placement strategies.

Also, the decoupling of critical components such as the TaintManager from NodeLifecycleController intends to enhance the overall maintainability of the project.

Additionally, this version presents usability upgrades to the scheduler and new structured authorization configurations, which ensure more sophisticated access controls within Kubernetes environments.

This release also deprecates several outdated features. The regression fixes for the OpenAPI descriptions of the imagePullSecrets and hostAliases fields are noteworthy, as consistency in these fields’ usage is crucial for operational integrity.

Additionally, this version signals the movement away from legacy security configurations in favor of more streamlined and modular approaches.

According to the release notes, Kubernetes version 1.30 has 45 enhancements, including 10 entering alpha, 18 graduating to beta, and 17 becoming generally available.

Earlier this month, the Kubernetes community celebrated 10 years since the first git commit to the project. The event, known as KuberTENes, was held in many places around the globe, with the official one sponsored by the CNCF in Mountain View, CA, and streamed live on its YouTube channel.

For detailed information on the Kubernetes 1.30 release, users can refer to the official release notes and documentation for a comprehensive overview of the enhancements and deprecations this version presents, or watch the recording of the CNCF webinar by the release team. The next release, 1.31, is expected in August 2024.



Presentation: Fast, Scalable, Secure: WebAssembly and the Future of Isolation

MMS Founder
MMS Tal Garfinkel

Article originally posted on InfoQ. Visit InfoQ

Transcript

Garfinkel: I’m Tal Garfinkel. I’m a research scientist at UC San Diego. Did my PhD at Stanford. During and after, I spent about a decade at VMware, from the pretty early days when we were just a couple hundred people, till it was about 26,000 people. My formative years were spent seeing the impact that virtualization had on the industry, from the point at which it was laughable that everything would be running in a virtual machine and VMs slowed things down by like 30%, to the point where it was a given that, of course, you’re going to be running in a VM.

Overview

I’m going to be talking about a few different things. I’ve tried to break this into two parts. First, I’m going to talk about, what is WebAssembly as an isolation technology, and how is it being used? What I was hoping for here is that even if you’re not into the gory details of Wasm or you haven’t used it before, you’ll all come away with some idea, this is where Wasm might be useful in a project for me, for a security project, for some extensibility piece, for some serverless piece. Someplace where, like, this fits, because Wasm is this distinct point in the design space. It’s not containers. It’s not VMs. It’s this other thing that you can use. The second part, I’m going to talk about the research that I’ve been working on recently. This is, how do we push beyond Wasm’s limitations. Wasm is doing something fundamentally unnatural. We’re taking software and we’re using it to overcome the limitations of hardware. Again, this is close to my heart, because of my background in virtualization. If you know the history of virtual machines, VMware when it started out was virtualizing the x86, which was something you could not do. The x86 as an architecture is not virtualizable. Instead, what VMware did was use a combination of low-level hardware features and binary translation to make the x86 virtualizable. This is how we were able to get virtual machines. For the first five years or more, there was no support from the hardware, and this was the only way to do it. Eventually, hardware caught up, and we got to this world where writing a virtual machine monitor is a project for a graduate student. I see this nice parallel with Wasm, where, again, we’re using the compiler and software to overcome the limitations of hardware.

Isolation: Fundamental to How We Organize Hardware

Isolation is fundamental to how we organize systems. We have two different technologies to do this. We have hardware-based isolation, which relies on page tables and protection rings. This is what processes, VMs, and containers are built on. It’s language independent, bare metal speed. Unfortunately, it’s designed to be relatively coarse grained. Communication between different protection domains is quite slow. Starting one of these things up is quite slow. It has a relatively large resource footprint. Fundamentally, hardware-based isolation today is not about doing fine-grained things. The other place where we see isolation, and I think we often take it for granted, is within programming languages. This could be modules, functions, or, in languages like Erlang with OTP, soft processes. These are language dependent and they impose runtime overheads. They can be very fine grained, where we can start and stop these things in nanoseconds. They have a very small footprint. They really match how we think about things. WebAssembly is based on a technology called software-based fault isolation. Software-based fault isolation is an idea that goes back to the ’90s. This was a few years before I got to Cal as an undergrad. This was in the heyday of extensible operating systems. People were thinking about microkernels. People were thinking about writing kernels in safe languages. There was this question of, how do we get more flexibility into our systems? One of the ideas that came out of this was, instead of using hardware to enforce protection, what if we use the compiler? SFI exists in this liminal space. It is relatively language neutral because it has this model of memory that looks very much like the underlying machine, but it lets us get these nice properties that we usually get from software-based isolation: orders of magnitude faster context switches than we’re going to get from VMs or containers, orders of magnitude faster startup time, small resource footprints, massive scalability. This is what makes companies like Cloudflare and Fastly able to do what they do with the hyperscaling that they do with edge-based serverless.

What is WebAssembly (Wasm)?

What is WebAssembly? It’s a platform-independent IR that you can AOT- or JIT-compile. It isolates components within a single address space, generally within a single OS process. How is this possible? It takes a lot of work. It started out as a web standard. You have to get all these various languages to target this new VM. You have to grow the ecosystem of tools to support it, and then build a community around it. This is where we are today. We’re about six years into this dream from when this first emerged, and all the major browsers supported it. Now it’s seeing adoption on the edge. A lot of folks are starting to use it in different applications that I’m going to talk about next.

How is Wasm Being Used?

How is Wasm being used? The first example I’m going to talk about, which is not the biggest example but the one that’s closest to my heart because it’s the stuff that we’ve done, is library sandboxing. Firefox uses WebAssembly to isolate buggy libraries. This has been close to, I think, a five-year project now, from talking to our friends at Mozilla, and recognizing there’s this problem. When you’re using your browser, when you’re going to a website, your browser is rendering all sorts of different content, media, XML, audio, all sorts of different formats. There are dozens of libraries to do this. They use these libraries for spell checking, tons of third-party code, dependencies. These are all written in C. When any one of these has a vulnerability, that can compromise your renderer. For example, let’s say I send you an image file, and that particular library has a buffer overflow in it. I pop that library, now I have control of your renderer, and I have control of that application. I have control of your email client. I may also have control of everything else in that site, because we do have site isolation in browsers. Maybe I’ve got mail.google.com, but maybe I also have pay.google.com for you. Maybe I also have cloud.google.com for you. It’s a very serious class of attacks. What we found is that it’s actually not easy. We had to build some tooling and infrastructure to enable this, but with the appropriate tools, we could start sandboxing these libraries. This has been shipping in Firefox since 2020. Today, if you’re using Firefox, a bunch of those libraries are actually sandboxed. This is not particular to Firefox. Every application out there depends on C and C++ libraries. This is where most of your bugs live: in dated code. A majority of our bugs are in these C and C++ libraries. Most of the safe languages that you use rely on these libraries. You may be like, I don’t have to care about this, I’m writing in Rust, or Java, or Python. Whatever language you depend on, you probably have a large number of dependencies that are actually just C libraries that have been wrapped by these languages.

What we’re doing now as part of our research is working on this: we built these tools for C++, for Firefox, but how do we get this out there and actually into the ecosystem? How do we build this as something that you can opt into, like when you pull a crate on cargo, or when you pull an npm on Node, to say, I want to opt into the sandboxed version of this library? Again, there are lots of other examples of how you can use Wasm. A big one of these is serverless. Today, again, edge platforms use this for hyperscaling. This allows them, every time these guys get a packet, to spin up a new instance, because they get low context switch overheads and can run many of these things concurrently without paying the context switch tax. They can start these up in microseconds, and they can use orders of magnitude less memory than they would if they were using a VM. Again, plugins and extensibility. This is a huge area. If you’re using a service mesh, like Envoy and Istio. Shopify has used this for running user code server-side, a stored-procedures style of using this, and Kafka as well. There are all sorts of places where you’re like, I would like some safe extensibility. I want to put some code close to my data path. Wasm is a great tool for that. Another place that Wasm is really catching on is in the IoT space. Because you have this whole diversity of different devices, and you have your developers that are writing your application code, and you don’t want them to have to worry about all the fancy details of the different compilers, different devices. Wasm’s platform independence is being used as this layer to shield people who are writing the application logic from the particulars of the different IoT devices. I think this is going to be very big, as we see more intelligence embedded in different devices, this model where, whether it’s Microsoft or Amazon or someone else dealing with software updates, and distribution, all these things for you, and you just build your business logic in Wasm. I think that this is going to be a really important model for using Wasm.

What Are Wasm’s Limitations vs. Bare Metal?

This sounds great. You’ve got this really lightweight isolation technology. How does this compare to running on bare metal? The reality today is there are a lot of limitations. One of these is performance overheads. Another of these is scaling. Again, out the gate, Wasm is still better at scaling than your traditional hardware solutions. It has some limitations as well. Spectre safety is a big deal. Who knows why the little ghost is carrying a stick? It’s a branch. Finally, compatibility is an issue. Essentially, Wasm is paravirtualization. Back in the day before we had virtualization hardware, it was like, let’s change the operating system. This is what Wasm is doing, but it’s like, let’s change the compiler to output this new IR. Some of that is stuff we’re dealing with, and some of that’s fundamental. Just a quick review before I get into this. Who all remembers undergrad operating systems? Does everybody remember page tables, and TLBs? Page tables are incredibly powerful. They’re like this amazing thing. You can use them for compression. You can use them for live migration. You can use them for overcommit. It’s a really powerful abstraction. Because as we know, in computer science, we can solve any problem with a level of indirection. This is what page tables are. The problem is that our TLB is a bottleneck. It is a bottleneck in terms of how many things we can shove through there, because if we have too much concurrency, it becomes a point of contention. If we want to modify our TLB quickly and dynamically, things get weird.

Something you probably don’t see every day, but again, you probably read about in undergrad. Who all remembers segmentation in x86-32? The other choice we generally have for architecting systems is segmentation. There are other forms of memory management out there, but this is a big candidate. Segmentation works differently. It is one of the earliest forms of memory management. Here we have this notion of base and bounds. It doesn’t have the issues of the TLB, but it has its own limitations. Our model is, when we do an address lookup, we do it usually relative to the start of a segment, and this segment has permissions and whatnot. It’s an alternative scheme. Why am I talking about this? A lot of the limitations that VMs and containers have that are fundamental have to do with the fact that you’re dependent on page tables. This limits scale and it limits dynamism. If you want to create these things, if you want to destroy them, if you want to resize them, you will shoot down your TLB. This is expensive. Again, if you end up with many of these things, you wind up contending for your TLB. I review papers and sometimes I get these papers and they’re like, we have address space identifiers, so we can tag things in our TLB, and this is how many of them we have. I’m like, the number you’re telling me is actually much larger than your TLB physically is, so you’re going to be flushing your TLB. Again, TLBs are incredibly powerful but they do have these inherent limitations. The other limitation is context switches. Part of this is protection ring crossings, but protection ring crossings are actually not that bad. Maybe they’re in the tens of nanoseconds. The scheduler is pretty expensive. If you have to buy into the OS scheduler, that’s going to cost you. Doing a heavyweight save and restore is going to cost you. There’s also going to be cache pollution. There are all these costs if you have to go back and forth through the operating system; when you want to switch between protection domains, it’s going to get way more expensive than if you can just basically do a function call, which is what’s happening in something like Wasm. I make these comparisons to lambda sometimes. You can spin up a Wasm instance in a couple microseconds. With a lambda, you’re talking like 100 milliseconds. Then there’s OS overheads, there’s language runtime overheads, whatever funny thing you’re doing in the virtualization layer. Recently Microsoft has Hyperlight, their very lightweight hypervisor that they’re going to use to run Wasm in. The startup times they’re reporting are on the 100 microsecond scale, which is still a couple orders of magnitude off of what we can get with just straight up Wasm.

What is the alternative? The alternative is software-based fault isolation. To me, software-based fault isolation is poor man’s segmentation: we don’t have hardware segmentation, so we’re going to do this in software. How do you do this in software? Two ways. The simplest way that you could think of to do this is to just add bounds checks to every load and store. The problem with this is that this is expensive. You’re adding high cache pressure. You’re adding the cost of executing those checks. You’re tying up general purpose registers to keep those bounds around. You’re easily looking at a 40% to 100% slowdown on real benchmarks. Of course, you’re not Spectre safe. That’s mostly not what people do. If you look at a production Wasm implementation, if you look at what’s happening in your browser, if you look at what’s happening server side, what people are doing is breaking the address space up into guard regions and address spaces. In Wasm, I’m doing 32-bit memories in a 64-bit address space. I can break this up into these 8-Gig chunks. The first 4 Gigs is my address space, the second 4 Gigs is a guard region. When I’m doing addressing in Wasm, so if I’m doing 32-bit unsigned loads, I take that, I add that to the beginning of my memory base address. Now I’ve got a 33-bit value, 8 Gigs. It’s either going to land in my address space, or it’s going to land in my guard region. What did this do for me? This lets me get rid of my bounds checks. I still have to do that base addition. I’m still adding an instruction to every load and store. Every load and store is going to be a base addition, and then the actual load or store. It helps a lot, getting us down to maybe the 15% to 20% range. It’s a big win. This also has limitations that it brings, though, unfortunately. I’m still involved with the MMU; I have not escaped it for setting up and tearing down memory.
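To make the two strategies concrete, here is a minimal C sketch of the logic involved; all names are illustrative, and a real Wasm runtime emits this inline as compiled code rather than calling helper functions:

#include <stdint.h>
#include <stdlib.h>
#include <string.h>

// Strategy 1: an explicit bounds check on every load and store.
static inline uint32_t load_checked(const uint8_t *heap_base,
                                    uint64_t heap_size, uint32_t addr) {
    if ((uint64_t)addr + sizeof(uint32_t) > heap_size)
        abort();                                  // out of bounds: trap
    uint32_t value;
    memcpy(&value, heap_base + addr, sizeof(value));
    return value;
}

// Strategy 2: guard regions. heap_base points at an 8-Gig virtual
// reservation: 4 Gigs of usable memory followed by 4 Gigs of
// inaccessible guard pages. Base plus a 32-bit address can never
// escape that window, so the bounds check disappears; an
// out-of-bounds access simply faults in the guard region.
static inline uint32_t load_guarded(const uint8_t *heap_base, uint32_t addr) {
    uint32_t value;
    memcpy(&value, heap_base + (uint64_t)addr, sizeof(value));
    return value;
}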

There are other limitations, though. One of these is the guard regions: they scale poorly. On x86 in user space, I’ve got a 2 to the 47 address space. Each of these 8-Gig regions is 2 to the 33. I do my math, and I’ve got 2 to the 14, or about 16,000 instances that I can put in here. I can arrange things a little more cleverly, and get about 20,000 instances. This sounds great. The problem is that when I’m using this in a serverless context, again, I’m spinning up one of these every time I get a request. Each of them is often running under a millisecond because I don’t have all those funny overheads with lambda anymore. It’s just running my code. Filling up an address space is pretty easy. I fill up one process address space, so I could start another process. Then I’ve got context switch overheads, then I’ve got IPC overheads when I talk to them, if I want to chain functions. I’m getting back into the space that I was trying to get away from. The other thing is the component model. A really great thing about WebAssembly is the component model. It’s this new thing, where instead of running your application in one address space, you’ll be running your application in multiple address spaces so you can have components. You can have your dependencies broken into separate, isolated things. It could be one library. It could be multiple libraries. We’re talking about potentially an order of magnitude or more increase in the number of address spaces per application. We need to do something about the scaling limit. The other problem with this guard region scheme is we can’t use it with 64-bit memories. We’re back to conditional checks. We can’t use it with older processors; these tricks only work under particular conditions. Finally, these tricks that we’re playing still have overheads.

The final limitation, again, is compatibility. This is a big one today, if you’re going to be using Wasm for real projects. I can talk about what you can do now, what you’re going to be able to do in a year, what you can really do in two years. Because you’re presenting this interface to programming languages, to compilers, for them to target, you need to provide a rich enough interface to get them what they need. You also need to provide a rich enough interface that they can exploit all the functionality in the hardware. Some of this is about getting better standards. Some of this is just fundamental to how Wasm works, and we’re not going to overcome it. For example, I don’t think dynamic code generation is anywhere in the near future. You’re never going to be able to do direct system calls, although a lot of this stuff can be supported by WASI. You’re never going to get your platform-specific assembly to magically run in Wasm, although we have some tricks for dealing with intrinsics.

Doing Better with Current Hardware (Segue and ColorGuard)

Two things. How do we do better with current hardware? Then, how can we extend hardware to overcome these limitations? By extend hardware to overcome these limitations, I mean the limitations that Wasm has, but all the way to, how do we get these properties that Wasm has, super-fast startup, teardown, and communication, all this goodness, without even having to get into Wasm, just for native binaries, for whatever code you run? First, optimizations. I have two things that I'm going to talk about that we're doing today, using hardware in dirty ways to make things go faster. The first is an optimization called Segue. It's super simple. It takes about 25% of Wasm's total tax away. The second is one called ColorGuard. This addresses that scalability problem I mentioned. It lets you go from the 20k instance max to about a quarter of a million. Back to segmentation. Segmentation was largely removed when we moved to x86-64. Intel was off doing Itanium. AMD was like, never mind, here's the new standard. As part of that, they dropped segmentation. I'm sure it simplified some things at the microarchitecture level. There were good reasons for doing this. What we're left with is segment-relative addressing, and two segment registers. The rest of our segment registers now don't do anything. You can still use ES, CS, DS, but they're just based at zero. We still use these two segment registers. If you're using thread-local storage, you're addressing things using segment-relative addressing with one of these registers. It's going to be FS in Linux, and I believe it's going to be GS in Windows.

As it turns out, we can use this amazing register for other things. You can actually use both of these for other things. In a Wasm runtime, you know when you're going to be handing control back to something that's going to be using thread-local storage. The trick is we just take that base addition that we're doing. With guard regions, we do a base addition, and then we do a load or store. We move that heap base into a segment register. Then we just do segment-relative addressing. Two instructions became one. A general-purpose register you were burning, you get back; you just use a segment register you weren't using anyway. You're getting rid of instruction cache pressure. You can see, our extra instruction goes away, which frees up an extra register, which frees up some extra flexibility for the compiler. As it turns out, we get a big code size reduction, which was a pleasant surprise to us. We get some nice performance improvements on SPEC. On specific workloads, we get even bigger jumps that I'll mention a little bit later. This actually landed in WAMR, which is Intel's Wasm compiler and runtime, just recently. I flagged that it also reduces compilation time for JIT and AOT. I have no idea why it makes that faster. It's one of these things where people deploy it and it's like, "That was great. I didn't even know we were going to get that."
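
In instruction terms, the transformation looks roughly like the following C sketch (an approximation, not WAMR's actual implementation; it assumes Linux, a processor with FSGSBASE, and GCC or Clang built with -mfsgsbase):

    #include <stdint.h>
    #include <immintrin.h>

    /* On entry to the sandbox, point GS at the sandbox's linear
     * memory. Needs the FSGSBASE extension and a kernel that
     * enables it in user space (Linux 5.9+). */
    static inline void set_heap_base(void *heap_base) {
        _writegsbase_u64((uint64_t)heap_base);
    }

    /* Without Segue, a guard-region access is: add base, then load
     * (two instructions, one general-purpose register tied up).
     * With Segue, it is a single GS-relative load. */
    static inline uint32_t load_u32_segue(uint64_t addr) {
        uint32_t val;
        __asm__ volatile("movl %%gs:(%1), %0" : "=r"(val) : "r"(addr));
        return val;
    }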

The other trick that we came up with is called ColorGuard. ColorGuard takes advantage of a capability on Intel and AMD processors called memory protection keys (MPK). This has been in Intel processors for quite a while, and in AMD processors since EPYC Milan. If you're on older AMD hardware, you might be a little bit sad, but this is relatively widespread. The way this works is, for every page, you associate a 4-bit tag. We usually think about those tags in terms of colors. Then each core has a tag register. If the color in the register matches the color on that page, then you're allowed to access it. If it doesn't match, then you're not allowed to access it. We have this cool new protection mechanism. What I can do with this mechanism is eliminate that waste that I had. Going back to that picture that we had in our mind, we have a 4-Gig address space and a 4-Gig guard region. Let's say we take that 8-Gig region and we break it into colors: the first 500 Megs is going to be red, the next 500 is yellow, the next 500 is blue, the next 500 is green, until we fill that up. Now what we can do is use the address spaces of our other VMs as our guard regions, and when we want to context switch, we just change the color in that register. All of a sudden, we've eliminated that waste. Now, again, we've got this big win, and we can run on the order of 12x more instances in the same amount of space.
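
On Linux, the page-coloring side of this is exposed through the pkey_* calls; here is a bare-bones sketch (error checking elided) of tagging a region and then flipping access with a register write rather than an mprotect call:

    #define _GNU_SOURCE
    #include <sys/mman.h>
    #include <stdio.h>

    int main(void) {
        /* Map a region and tag its pages with a fresh protection key. */
        size_t len = 4096;
        void *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
                       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

        int pkey = pkey_alloc(0, 0);              /* one of the 16 colors */
        pkey_mprotect(p, len, PROT_READ | PROT_WRITE, pkey);

        /* The "context switch": flip rights in the per-thread PKRU
         * register. No page-table change, no TLB shootdown. */
        pkey_set(pkey, PKEY_DISABLE_ACCESS);      /* color now off-limits */
        pkey_set(pkey, 0);                        /* accessible again */

        printf("tagged region with pkey %d\n", pkey);
        pkey_free(pkey);
        return 0;
    }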

Extending Hardware (Hardware Assisted Fault Isolation)

This is fun. This does not address all the performance limitations. It doesn't even address all the scaling limitations, even with that cool trick, because again, we're restricted to our 15 colors and our 8 Gigs. It's nice. How do we go further? After working with this stuff for a few years, and beating our heads against the limitations of Wasm, we were like, we just really want hardware support. What do we want from that? We want three things. Number one, we want a really simple microarchitecture. The reason for this is that it's really hard to get changes into real processors. You've got a very limited budget in terms of gates if you're close to the critical path. We worked with folks at Intel, and we went back and forth a lot in terms of, can we have this? They're like, no. We're like, ok, let's figure out how to work around that. Another thing we wanted is minimal OS changes. It is really hard to get Linux to do things. It is really hard to get Windows to do things. For example, MPK is not supported in Windows, and I don't think it's ever going to be. There are lots of funny, silly things; if you hang out on a kernel mailing list, you'll be like, that is a really small change, I'm shocked that it's taking this long to get supported. We don't want to be involved with that. The third thing we want is the ability to support both Wasm and other sorts of software-based fault isolation, as well as other sorts of compiler-enforced isolation. For example, V8 now has its own custom way of doing this, because the V8 JIT is an area that has lots of bugs. Of course, we want to be able to support unmodified native binaries as well, so the whole spectrum of these things.

Our solution to this is an extension we call hardware-assisted fault isolation (HFI). HFI is a user-space ISA extension that gives us a few key primitives.

One, it gives us bases and bounds: the ability to say, here's a set of data regions, here's a set of virtual address ranges, that I can grant the sandbox access to. I set up my mapping of virtual address regions. I say, I'm going to enter the sandbox. Once I'm in the sandbox, that's the only memory we can access. I execute my guest code, my sandbox code, and then when it calls exit, it hands control back to our runtime. There are more details, but that's the essential idea. It's dead simple. What HFI provides for us is super-fast isolation. Those bounds checks are executed in parallel with your TLB lookups. They add no overhead to native code, which is what you expect with your page tables as well. We get very fast system call interposition. This gives us some forms of Spectre safety. It gives us unlimited scaling. It doesn't have any of those constraints that we had before. We have a small amount of on-chip state, we context switch that state, and we're done. We can set up, tear down, and resize sandboxes very quickly. This is just a change of registers. Finally, of course, this is compatible with existing code. We don't have to get into the whole huge game that we're playing with Wasm.
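
HFI is proposed hardware, so there is nothing to call from C today, but a toy software model of the data-region primitive conveys the semantics: a small table of base/bound ranges standing in for on-chip state, consulted on every access. In real HFI that check runs in parallel with the TLB lookup, so it costs no extra instructions; here it is simulated with a loop:

    #include <stdint.h>
    #include <stdlib.h>

    /* Toy model only: the region table stands in for HFI's on-chip
     * registers, and hfi_load models the hardware check. */
    #define HFI_MAX_REGIONS 4

    typedef struct { uintptr_t base, bound; } hfi_region;
    static hfi_region regions[HFI_MAX_REGIONS];
    static int n_regions;

    static void hfi_grant(void *p, size_t len) {   /* runtime grants a range */
        regions[n_regions++] =
            (hfi_region){ (uintptr_t)p, (uintptr_t)p + len };
    }

    static uint8_t hfi_load(const uint8_t *p) {    /* guest memory access */
        for (int i = 0; i < n_regions; i++)
            if ((uintptr_t)p >= regions[i].base &&
                (uintptr_t)p <  regions[i].bound)
                return *p;                         /* inside a granted region */
        abort();                                   /* models the fault/exit */
    }

    int main(void) {
        uint8_t buf[16] = {42};
        hfi_grant(buf, sizeof buf);
        return hfi_load(buf) == 42 ? 0 : 1;
    }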

One challenge: this sounds like a nice idea. You're like, great, I want to do upper and lower bounds checks. I'm going to implement regions as just segmentation. I'm just going to grab two 64-bit comparators. I'm going to do an upper bounds check. I'm going to do a lower bounds check. Amazing. We went to an architect and told them, this is what we're going to do. They politely laugh, take away your lollipop, and send you home. We've got a limited gate count. We cannot slow down existing pipeline stages, because that'll slow down everything. We can't put complex circuits in there. We're right on the data path. This has been described as the Manhattan of chip real estate. How do we do these checks in a way that requires very little silicon? Our answer to this is that you specialize regions into a few different types. We have two different types of regions. Again, going back to the world of x86 segmentation, remember, we used to have segments that apply to all memory accesses, and segments that apply where you use segment-relative addressing. Regions are the same way. We have explicit regions where you do relative addressing. The nice thing about this is that we can do it with one 32-bit compare, because it's always one-sided. I get all the expressiveness of my x86 mov instruction, except I'm going to constrain it to say that my index has to be positive. Doing that, I can just do a one-sided comparison. I can do this with very little hardware. This only works, though, if I'm doing relative addressing. The other thing I want is to be able to just apply this to everything. How I'm going to do that is I'm going to give up some granularity. With my explicit regions, I can have a large region and a small region. With my small regions, where I can address up to 4 Gigs at a time, I'm going to have byte-granular sizing. This is really important for doing sandboxing. When we sandbox libraries in Firefox, we do not want to change them; somebody else maintains that code. We want to be able to say, this is a data structure, here is where it lives in memory, and I'm going to map that into my sandbox without changing code for that. I have to have byte granularity for that. For larger things, again, with Wasm, I'm doing things relative to the base of linear memory. Actually, I want this, because with Wasm I can have many linear memories. I'm going to use my region-relative addressing for that, but there I only need to grow in 64k chunks, because I'm growing my heap. Then with implicit regions, I can use this for code, I can use this for static data, I can use this for stack. Because I'm using masking, I have to be power-of-two aligned and sized. Often, I can deal with that extra slop. What I get for that is a very cheap implementation.
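
The hardware-frugality argument is easier to see in code. A sketch of the two checks (again illustrative, not the actual ISA): the explicit region needs only a single unsigned compare because the index is constrained to be non-negative, and the implicit region needs no comparator at all, just a mask, which is why it must be power-of-two sized and aligned:

    #include <stdint.h>

    /* Explicit region: base-relative addressing with an unsigned
     * index, so one one-sided 32-bit compare is enough. */
    static inline uint8_t load_explicit(uint8_t *base, uint32_t size,
                                        uint32_t idx) {
        if (idx >= size)              /* single comparator */
            __builtin_trap();
        return base[idx];
    }

    /* Implicit region: applies to all accesses (code, stack, static
     * data). mask = region_size - 1, with the region power-of-two
     * aligned and sized; anything outside is forced back inside. */
    static inline uint8_t load_implicit(uintptr_t region_base, uintptr_t mask,
                                        uintptr_t addr) {
        addr = region_base | (addr & mask);   /* no comparator at all */
        return *(uint8_t *)addr;
    }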

Another thing that I really wanted early on with this was system call interposition. The reason for this is that it makes things a lot simpler. I've written probably six different systems that do system call interposition. I've done library interposition. I've written a new process tracing mechanism for Linux. I've used Linux ptrace. All of those experiences were terrible. Modifying the kernel is terrible. Using the existing thing is really expensive and slow. Library interposition is just really fragile. When you can change the processor, you can do something really simple. You can either just change microcode or implement this with conditional logic. You can just say, every time I see a privileged instruction, syscall, sysenter, int 0x80, whatever, turn that into a jump. Just return control back to my runtime. Another thing that we get from this that's really nice is Spectre-safe bounds checks. When we're doing bounds checks in software, our processor does not know about them, so it will happily speculate past them. Now that we're doing things in hardware, we can architect this: whether a memory access is speculative or non-speculative, the bounds check will get enforced. That means we're never going to speculatively access something outside the bounds of our sandbox. We've got our sandbox memory region; even if I mistrain the processor, it will not speculate beyond the bounds that are being enforced by HFI. This does not solve all of Spectre. There is no easy solution to Spectre. Out-of-order execution is deeply ingrained in how we build processors. It relies on prediction structures, the PHT, the RSB, the BTB, things like, is this branch going to execute? Where is this branch going to go? Branch prediction is key to how you go fast. You need to keep on going, keep on fetching into your pipeline. We can't just go and flush those structures. We can't solve this. Nobody else seems to have a great solution either.
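
For contrast, here is what runtimes do in software today to approximate that speculation safety (a standard masking idiom, not from the talk): fold the bounds check into the address as a data dependency, so even a mispredicted path cannot form an out-of-sandbox address:

    #include <stdint.h>

    /* If idx is in bounds, mask is all ones and the address is
     * untouched; if not, mask is zero and the access is pinned to
     * offset 0, which is still inside the sandbox. There is no
     * branch for the processor to speculate past. HFI gets the same
     * guarantee from hardware, with no extra instructions. */
    static inline uint32_t load_u32_masked(uint8_t *base, uint64_t size,
                                           uint64_t idx) {
        uint64_t ok   = (uint64_t)(idx < size);   /* 1 in bounds, else 0 */
        uint64_t mask = ~(ok - 1);                /* all ones, or zero */
        return *(uint32_t *)(base + (idx & mask));
    }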

We’ve got these primitives, they work for Wasm and other SFI workloads, and for native binaries. We just use them differently in these different scenarios. Two different uses models, but essentially the same primitives. How does it work in terms of performance? In simulation and in emulation, and in emulation we validate with our simulation. We get a huge speedup over doing bounds checks, which is great for 64-bit memories. Over guard regions, we get a little bit of a speedup, which is nice. We also get all these nice properties in terms of scaling, Spectre resistance, small address spaces. On some workloads, reducing register pressure and high cache pressure, and all those good things I was talking about before. Get some really big wins, like here with image rendering in Firefox.

Questions and Answers

Participant 1: You talked about how lightweight context switching is, and how easy switching is. The examples you have, like sandboxing in Firefox, that's a highly collaborative workflow. You're going to call a function anyway. You sort of know how much [inaudible 00:37:46]. Essentially, you're just really estimating them. Is there a scenario where you have lots of different applications, where potentially some are malfunctioning? Is there a need for some sort of a scheduler? How sophisticated does the scheduling need to be? Because a real kernel will straight away enforce a certain kind of resource isolation, the processes can be interrupted, things like that. Is this a concern for Wasm as well?

Garfinkel: This is an area that I'm actively working on and interested in. It gets into, how does user-space thread scheduling work? How does concurrency work? Wasm, the core mechanism, just gives you address spaces, gives you isolation. Then you can put that into a context where you have concurrency, but that's not part of the standard. The way that most folks do this today, like Fastly, is they're running their Wasm runtime with Tokio running underneath. They're using Tokio underneath to multiplex this thing. The Wasm runtime will have its own preemption mechanism built in. There are performance challenges associated with that, which is something I'm actively working on.

Participant 2: I wanted to ask about your thoughts on the recent WASIX runtime and what that might mean for multi-threaded applications of Wasm, for being [inaudible 00:39:40]?

Garfinkel: There’s two aspects of threading support. One is like, how do you do atomics and things like that? That’s the Wasm thread standard. It’s pretty far along. The other thing is, how do you create threads? The challenge around that has been, that there’s a difference between, how do you do this in browsers and how do you do this on the host side? Browsers are like, we have web workers. All of us who are more focused on the non-web case, we’re like, that’s great for you, but we need some way to create threads. What’s happened recently is there’s this WASI threads proposal, which is out, and it’s supported now in Wasmtime, probably in some other places too. WAMR has had its own ad hoc support for threads for a while. There are things that I don’t love about the WASI threads proposal, because you actually create a new instance per thread. It really lit a fire under people’s asses to solve the problems we needed to solve to get the right threads proposal. This has to do with how browsers do threads. Because a Wasm context involves not just the linear memory, but also certain tables. You can just say, this is what I can safely share, this is what I cannot safely share. The standard has to make its way along. Threading support is coming along. WASI threads is something that you can use today. If you have a use case where you need threads, I think it’s doable. You’re not totally unsupported. This is a good thing about the ecosystem, there’s the parts where you’re right on the path. There’s the part where it’s a little bit rocky, but you’ll be fine. You’ll get the support you need if you’re doing a real application, and there’s like off in the weeds.

Participant 3: The earlier examples, where Apple silicon was in the slides. Were there any specific differences?

Garfinkel: In terms of the optimizations?

Participant 3: The general idea was for simplicity, but [inaudible 00:42:21].

Garfinkel: Arm doesn’t have segmentation. That’s not something we can use. Armv9 will have two forms of memory tagging. Armv9 will have something called permission overlays, which is kind of like MPK but a little bit better because it gives us Exe permissions as well. It’s a very cool primitive. It will also have something called memory tagging extensions, where we can tag things at collections of byte granularity, so even finer granularity. We think that we can use both of those to pull off this trick. We haven’t written the code in the paper yet. There is no Armv9 hardware out there, except for I think in mobile phones right now. I think the third-generation Ampere stuff will have it.

Participant 3: If someone wants to start dipping a toe in and start working with Wasm, what would you suggest? Where do you go? What tools?

Garfinkel: There’s two tool chains that I think are really solid, and then the third one that we work on, that we use for Firefox, which actually is good for specific things. Wasmtime is super well supported, and I think rock solid from a security standpoint. That’s what Fastly builds on. That’s what a lot of folks build on. I would grab that. There’s WASI Clang for doing C and C++ stuff. Rust has really first-class support. It depends on your language. You need compiler support. You compile whatever your code is to Wasm and then compile your Wasm to your local binary, if you’re AOTing, or you can do your JIT thing for whatever reason you want to do that. I think it’s always good if you have a need ready, you have a project, like something that is cheering on you that you’re like, I need to solve this thing. There’s Bytecode Alliance. Most of the community is on Zulip. You can ask core technology questions there. It’s a pretty tight and pretty supportive community, kind of like the community around various programming languages. If you have an application, I think you’ll definitely find the support that you need around that. If you’re interested in the sandboxing stuff, like sandboxing native code, you should talk to us.

See more presentations with transcripts



Podcast: Making Change Stick with Neil Vass

MMS Founder
MMS Neil Vass

Article originally posted on InfoQ. Visit InfoQ

Transcript

Shane Hastie: Good day, folks. This is Shane Hastie for the InfoQ Engineering Culture podcast.

Today, I’m sitting down with Neil Vass. Neil was a recent QCon speaker at QCon London and is based in the UK, and today I’m sitting down in Australia. Neil, welcome. Thanks for taking the time to talk to us.

Neil Vass: No problem. Thanks for having me on the show.

Shane Hastie: My normal starting question is who’s Neil?

Introductions [01:16]

Neil Vass: I’ve worked in tech for 18+ years, I think. I’m scared to go back and count exactly. I was a software developer for quite a long time and then gradually moved into, partly because nobody else was doing it, things like delivery management, the product side, the why are we doing this anyway, and moving towards talking to stakeholders about do you know what you actually want? That would help. And can these teams work together well?

I’ve worked at Tessella, which was a tiny company that nobody’s probably heard of, but worked with all huge big conglomerates and different companies around the world doing such scientific research things, so working for AstraZeneca, Unilever or other people about what supports their scientific research was quite an interesting topic to work on. After that, it was the BBC, which is British Broadcasting Corporation, which is probably famous around the world for spinoff services like Dr. Who and stuff, and for the last six or so years I’ve been at the Co-op, which is 65,000 people, 180 years old, and has lots of different businesses about food stores, funerals, insurance and all sorts. I’m an engineering manager now, which means I line-manage engineers, I make sure we’ve got the right people and skills on teams and help talk to teams about improving how are they working and what would we like to get better at next.

Apart from work, I’m from Scotland, originally, which is in the north of the UK. I moved to Manchester for University a scary amount of time ago, so I’ve actually been in Manchester longer than I was ever in Scotland, which was an odd landmark to pass. But I think I’ve kept my accent, and I hope people can understand Scottish okay. I live in Manchester, which is quite a big city, and I’ve got a wife, two kids, a dog, and a sourdough starter is the latest addition to the family.

Shane Hastie: Nurture the sourdough starter. A good sourdough is a wonderful thing. Your QCon talk was Making Change Stick: Lessons Learned From Helping Teams Improve at the Co-op. Why do we need a talk on making change stick? Don’t we just tell people to do things and they do it?

Changes imposed without communicating the why [03:20]

Neil Vass: That’s a really, really good question and is something I touched on in my talk. I think, if you get into the right position of authority, you can probably make people do all kinds of things, but you can’t make anyone care, and you can’t make them find it useful. I think where we fall down loads of times is we say the standard is this, and it could be anything from you have to do sprint planning, you have to do retrospectives, you have to show me the evidence you’ve done them. Honestly, if you’re not getting anything out of them, you either need to change how you’re doing them or you need to stop.

I think huge amounts of waste are generated in the industry by all kinds of things that we say people need to do, but I want you to care about it, I want you to find it useful and I want you to experiment with it. If it's not quite fitting for you, why not? What's going on? I think in the Agile world, we've realized that in software, Agile and DevOps and Lean and people-focused approaches are the right way to do it. That works better, and we've made a caricature of the old style, waterfall project management.

But one thing that’s interesting about that is there are some good ideas in how waterfall works, like make sure you’ve thought about your risks, make sure you’ve planned things upfront and you know what your unknowns are, but that easily morphs into getting through stage gates by weight of documents. If you’ve got six kilos of risks here, you must be on it. Or everyone knows that that plan’s nonsense, but we’ve submitted it, so we have to say we’re sticking to it. If we’re not doing waterfall well, what makes us think we’ll do Agile any better? So, you can easily slip into the same kind of you’re doing the outward signs of what’s useful and it’s just not. Get curious about things, work out what practices could help solve your problems.

Making change stick [04:54]

My last topic is why making it stick is important. I said in my talk there are multiple talks you could give about this: who am I to give people advice anyway? Did they actually want your help? How can help be useful if you're helping multiple teams and you're not there every day to say, "Not quite like that. It's this"? There are loads of things there. But the particular challenge I was looking at, and I think it was interesting for me to reflect on, and I've heard other people struggle with, is where somebody's had a challenge, you've talked through what might help, you've helped them implement it, and they're like, "That's great."

You come back a couple of months later, and maybe the same people, maybe a few people have changed on the team, or maybe it’s the exact same one, were saying, “I’m struggling with this.” I said, “That sounds like the same challenge. What about that thing you started doing that fixed it?” “Oh, we’ve kind of stopped.” Often, it’s vague. What was the reason? Has the kind of work changed or is the person who was driving it gone, or did it just fall out of the habit?

Sometimes, you think, "I've helped you. I've gone and helped you. I've gone and helped you," and I look back at the first one, and they're like, "Ah, I've made no progress at all." If it was just a me thing, I'd worry about it myself, but it's something I hear lots of people talk about. It's something I see other people struggle with too, and the teams themselves are sometimes at a bit of a loss: "We've solved this once. Why am I fighting the same fire again?" I thought that would be an interesting focus to put on as I looked back through my career at the Co-op.

Shane Hastie: What are some of the reasons that this does happen? Because I agree, I’ve seen this pattern, and it’s a common pattern across many, many organizations and many teams, and I would say, it’s not restricted to large companies. It’s large, small, intermediate, so what’s happening there?

Solving the same problem over and over again [06:28]

Neil Vass: It’s an interesting one. I’m not sure I’ve got the full story of it. You could either let it depress you or let it give you some strength is that people who really know what they’re doing, like industry luminaries, struggle with it just the same. I had an example of Henrik Kniberg, who first popularized the Spotify model and a lot of other things, he wrote a really interesting book called Lean from the Trenches, which is a detailed case study about how they made Lean thinking and Kanban work for them, and it talks about the people, the managers, the stakeholders, what do they work out and what’s doing. At the end of the book, you’re like, “This lot know it. They really understood what wasn’t working for them before and what works for them now. They’ve cracked it. That’s good, that’s how you solve things.”

Then in his follow-up presentation, once he’d moved to Spotify, he’s got a slide in there about a train just going off the rails over a cliff, and this was the story of the very next project by the very same group of people, all the same stakeholders and bosses and everybody forgetting everything they’d learned. It was like, “We’ve talked about this. What’s wrong?” You could think, “Well, nobody can make change stick,” but I’d prefer to think it’s just hard. It’s not that I’m not good enough, we’re not thinking about things enough, or where I work isn’t right, it’s something we all struggle with.

One of the reasons for it, I think, is that if you want to check you're still doing something, it's easier, it takes less energy, to make sure the outward signs are still there rather than think about, "What were we doing this for, and are we still getting that? What problem was it trying to solve for us, and do we still have that problem?" Things can shift without you realizing.

One pattern I saw as I looked to helping multiple teams at the Co-op was we'd put in place, "Here's how you kick off a team. Here's how you should set up your goals. Here's the things to think about, and a team charter and things like that." But over the course of just a couple of years, the part of the organization I was in had moved, without really noticing it across multiple teams, from everything being discovery or very early stage, "We're just prototyping," or, "Things are new, and we're working it out," where you can see the problem and you can see a sort of end stage, to almost everything being a long-running live service. That means you're now looking to incrementally improve rather than have a focused, "This is a new thing." You've got teams that have been around for a long time, and even if the same people are on them, you've kind of forgotten some of the reasons you talked about. It seems like we've said this stuff, we've talked about what was useful and why we're adding things, so it feels boring or unnecessary or patronizing to repeat it.

I think people don’t realize, and the number of times I have learned this lesson and then had to repeat it to myself, because it didn’t quite stick, is that thing about being a broken record about when you are bored of talking about a thing, people are just about starting to listen. I think that works with ourselves as well. Outward signs might persist, and as people get busy, you only do that outward thing or you let things drop altogether. I suppose the last one is when we think about improvement, often what we think about is what can it add, what new process or technique or habit is going to do it? Before you know it, the whole week’s absolutely full of all the wise and wonderful things that you’re doing to help yourself that we’re not getting any work done anymore.

Shane Hastie: One of the things that certainly I’ve seen is you’re right, we’ll bring in a new process, we’ll bring in a new thing that we want to do because there is value in it, and time gets full. We do get busy. How do we choose what we stop doing?

Change involves stopping doing things, not just adding new elements [09:47]

Neil Vass: That’s a good one. One exercise I’ve used on one team I got into a good habit, it was a quarterly what have we achieved, what have we done, what did we try that didn’t work? But also a good layout like make a mock-up of your team’s calendar on the wall and a mock-up of your team’s Kanban board or whatever you use, things that you recognize. We used color-coded Post-Its. I think we had yellow ones to explain what’s this for? That was interesting, because if you think your stand-up is to update your scrum master on what you’ve done yesterday, that’s a good chance for somebody else in the team to say, “That’s not really what they’re meant to do. Can we use something else if we don’t need updating?” So the yellow, here’s what this is for. Some reds, like this is confusing or not working or needs to change, and the green Post-Its for, “I like this. This is valuable and still helping us.” You can choose different colors because it’s hard to find the right Post-Its.

But that has sparked some really useful and interesting conversations, especially when you see big clusters of confusion about what we're meant to be getting out of something. Also, having everything up there at once does make you wonder about efficiencies. Do you have a separate time when you check in on your deadlines, or your objectives and key results, or whatever you're measuring progress with? Is there a separate regular meeting about that? Is that something that should come up naturally as you're talking at stand-up or some other time? Could that fit in somewhere else, and do you feel you know it? Quite often what we miss is the flip between modes: there's some habit that needs embedding, something that's new and a bit confusing for us, so we need to talk about it regularly. Later, can that just turn into an information radiator? Can it be somewhere that prompts us to have a chat ad hoc as needed, while we still get the value out of it? I think that helps.

The last bit is probably, what do we value, and what have we put in place for a good reason that's now holding us back? A phrase that I really like is organizational scar tissue. When something goes wrong, we have to protect ourselves against that. Tissue grows over it, and before you know it, that's the well-meaning origin of all the change approval boards and stuff with an enormous checklist, like, "And have you thought of this, and have you checked that?" It's because one time somebody hadn't checked the DNS settings, so now everyone's got to write a whole sheet about what they've done with DNS. Actually, that was something we didn't have an eye on one time. We know it now. Can that go? Because everything you add will have some impact on something else you can't do, so talk about what's valuable and what matters to you.

Another nice bit of focus there is on what we can add and what we can do. Your reflex is to keep things, because it keeps us safe when we do things. But keep in mind things like the Accelerate metrics; I've always talked to teams about those being important. We value being able to release again and again, frequently, and being confident it's all right. Now, you've got this fabulous book with thousands of different companies backing up your story. If we value that, what are we doing that harms that? Anything that slows you down from being able to release, being able to roll back confidently, being able to go from "I've got an idea" to it being out there: if something's impacting that, it really has to justify itself and carry its weight, and if it doesn't, be prepared to take it away or simplify.

Shane Hastie: Removing the friction, challenging those ceremonies, practices, that takes a bit of courage.

The dangers of artificial harmony [12:59]

Neil Vass: Absolutely. I think in lots of teams, you get stuck in that artificial harmony where we’re all too polite to each other, and often, it’s some dedicated person’s role on the team who has designed the current setup. If somebody’s put in these ceremonies, if somebody’s put in this way of working, and I say, “I don’t think it’s valuable anymore,” it’s not me saying, “I don’t think you’re valuable, you do the wrong thing.” So absolutely, probably job number one on your team is to get to a position where it’s fine to talk about anything. If people actually genuinely believe you know where they’re coming from, you value and respect their skills and abilities, you like someone and think they should stay on the team, if that’s all off the table and taken for granted and just understood, you can have much shorter and easier conversations, because, “Yes, then I know all this. Can I talk about changing this, because I don’t think it’s working for us anymore?”

You’re right. I’ve seen lots of teams where bringing it up takes weeks of thinking about, “How could I even mention it,” and awkward dancing around the issue to the point where nobody’s really sure what you’ve even said. Getting to a point where you can just talk to each other is really important on the team.

Shane Hastie: These are, I want to say, skills, competencies that we don’t teach engineers at school.

People skills are vital for success, however they are not taught in engineering programs [14:16]

Neil Vass: I think that’s really true, and I’ve got lots of empathy for it, because I remember learning engineering myself when you’re first at school or early in your career, there’s so much to know just about the tech side. How does the internet work, and how do all the things I talk to as well as the language and frameworks I’m learning. It’s a big job, so I’m definitely not saying you need to learn it all at once, but I think the value of it can make so much of your job easier. If you’ve got any good ideas about getting work done, having a satisfying job, getting people to listen to your ideas and stuff, knowing about this people side is at least as valuable knowing about the tech.

Something else that I like to stress to people is that lots of it's super easy. If you've not thought about it before, how can I bring this up delicately? How can I negotiate agreements in a team without insulting or upsetting anyone and still get my point across? If you've never worked on that before, there's lots of, "You can just do this and do this." It's like a magic trick. Sometimes, you think, "Surely what I need to know is the latest frameworks, the latest hard technology stuff." That's good, but these skills are valued just as much, and they'll help you in your career just as much, so put some focus on them too.

Shane Hastie: What are these people skills? I don’t like to use the term soft skills because they’re often harder to build than some of the technical skills.

“Soft” skills are actually really hard [15:37]

Neil Vass: Absolutely. You can think of it like tests: you hate those flaky ones where you do the same thing multiple times, you keep running it, and sometimes it passes and sometimes it doesn't. That's a problem in tech, but that's how people work, so I can see the challenge of trying to get that right. Some things that have really helped me: there's a Camille Fournier book, The Manager's Path, that talks really clearly about the progression.

If you were to go from just out of school, beginning programmer all the way up to a CTO or even higher, they talk about what changes at each stage and what you should be thinking about and probably earlier than you think. It’s not like I’m a line manager now or I’m a head of engineering or something, and now I need to know some people stuff. It’s on a team, day-to-day, negotiating with your peers and stuff like that where you can develop it. But there’s some really good chapters in there saying things like, “It’s at this point you realize that what got you here isn’t the skills you need anymore.” That shift as you get more senior is important.

Other things have helped when you're struggling with something in particular; there are loads of books and podcasts and topics to look into, so there's not one manual for how people work. But something that really opened my eyes about what was going on with lots of people is a negotiation book called Getting to Yes, by William Ury and some other people. This gets taught at lots of universities. What I liked about that was, with negotiating, you want this and I want that, and we often think of that as I'm trying to buy a used car off you. I'd like you to lower the price. But negotiating happens all the time. When we've got different ideas about how to approach something, how are we going to proceed? That is pretty much most human interactions.

Even knowing that negotiation was a topic that I should be looking at was a light bulb for me, and what I loved about that was there’s loads in there that I recognized from speaking to people, but I’d learned it as I think this individual needs that, you need to have prepared the ground, you need to have talked through this, you need to show you’ve heard them. Or sometimes, it’s that department, like marketing people think like this. With this book, some of the tricks I’d done already, I realized that’s how people work. “Oh, this is valuable. This is spectacularly useful everywhere,” and there was lots of good things in there that just work in day-to-day conversations.

One thing that stuck out for me, as just an example of something I'd never thought about before, and I've thought about in probably every conversation since: sometimes, if you think we should do one thing, and I don't think we should do that, it feels like we should use our time together to lay out my case, to talk about what's going on. But what that often feels like is that we've got nothing in common. 100% of our talk is about us disagreeing, so this person doesn't get it. But quite often, both of us agree on a lot. You take it for granted and you've never considered it: both of us want this product to be a success. Both of us think doing this on the web is the right thing. All kinds of stuff. There's a huge, vast wave of stuff that we agree on, and it's on this little point that we want to see how we can make our shared endeavour work.

The other part of that was, sometimes it feels like you don't want to give somebody's arguments too much credit, because if I thoroughly understand where you're coming from and I don't think that's the way we should approach it, if I understand it and can talk through your reasoning, does that mean I have to agree with you? That's not the truth. I can completely, thoroughly see where you're coming from and still disagree about what the next step is. More than that, for a lot of people, if I'm saying you're wrong, what they think is, "He's not understood me. He doesn't see where I'm coming from. I have to explain it again." Instead of listening to your side of it, they're thinking about how to reframe their argument and waiting for their turn to go again.

Instead, if I can reframe your argument, repeat it almost better than you can, the benefits and the reasons why it's valuable, that can often be the first time somebody relaxes and thinks, "This is someone who gets it, and he says we should do something different." That's just such an unlock for negotiating with people in all kinds of situations.

Shane Hastie: So this people stuff, you’ve got 18 years experience, you’ve moved into that leadership management role. What are some of the key skills or pieces of advice that you would give somebody who’s wanting to step into a technical leadership stance or space?

Advice for new leaders [19:51]

Neil Vass: Oh, that’s a good question. You’re probably doing a lot of good stuff already. If you’re a human navigating the world, like anything I’ve mentioned here, you’ve done versions of already anytime that you’ve not just said okay and got on with a task you’re handed, or anytime you’ve not just been suggesting things, and everyone goes and does it, you’ve had to negotiate what’s going on here. It’s not just that negotiation, it’s the whole how does this team fit together, what skills and abilities do we have or need, and how can we help build them? Thinking about what you’ve done already is useful and what areas have you sort of enjoyed or found success with? That can give you areas to lean on and think about doing more of.

Another important shift that really helped me think about leadership is that your job on a team is to maximize the output and the capability of that team, so you're trying to help your team understand what kind of work is valuable, understand the impact of the work, and get work out the door. Quite often, I thought of myself as, "Work comes in, there are different cogs here, and my cog is spinning really fast," and you definitely know that that is not how the overall machine takes in more work and puts more out the other side. What can you do, what can you suggest, to improve the capability of the whole team? The kind of suggestion that comes up there is: if there is somebody who has lots of tech skills and knows all our systems and how they interact with each other, so much work will come in where it's faster if this person does it. But that's not true if 90% of our work goes in the queue for that one person.

Suggesting ways we can deal with that, and things we've done there before: if work comes in, and this person's going to be 10 times faster than anybody else, they are not allowed to touch it. Someone else has to pick it up. This person can be there and watch, but do not touch the keyboard, until you get to a point where more of the team can do the work. It's not an automatic send-it-to-that-person. There's that and other concrete examples about skills and abilities. If somebody's not doing this on your team, you can definitely suggest it in any team sessions, or take people aside and talk about it and just say, "I'm interested in how we help level up this whole team."

That can be on lots of dimensions: how can we get more skills, more domain knowledge, more understanding of the impact? Did anything we released make any difference? Sometimes, people think they don't want to bore you with that, so just being an advocate for bringing that up, in whatever little ways we can sprinkle it in, helps you and the people around you think more about the big picture. Our job here isn't just to churn through the tickets. It's to build all our skills, to understand the impact, to choose more sensible things to work on in future, so we can do less and still get that business impact and meet user needs.

Shane Hastie: Your job isn’t to identify the requirements, your job is to change the world. Love it. Neil, really interesting stuff. Some good solid advice there. If people want to continue the conversation, when do they find you?

Neil Vass: I’m still mourning the collapse of Twitter. I used to be on there all the time, but it’s been a good few years. I’ve started my own blog at Neil-Vass.com, which is definitely where I’ll write anything that comes into my head about this and many other topics. On there, it’s got a list of LinkedIn and Mastodon and other social sites that I’m trying to do it, so absolutely leaving comments there or getting in touch on any social medias to ask anything or suggest anything that you think I should look into more, I’d love to hear about it.

Shane Hastie: We’ll make sure that link is in the show notes. Neil, thank you so much for taking the time to talk to us today.

Neil Vass: It’s been great. Thanks for having me.





Why MongoDB Plunged Over 35% in May – Yahoo Finance

MMS Founder
MMS RSS

Posted on mongodb google news. Visit mongodb google news

Shares of database software up-and-comer MongoDB (NASDAQ: MDB) cratered 35.4% in May, according to data from S&P Global Market Intelligence.

MongoDB happened to report first-quarter earnings on May 30, and the subsequent reaction on the final day of the month accounted for the bulk of the decline. While revenue and non-GAAP (adjusted) earnings per share (EPS) beat expectations, management's forward-looking guidance caused shares to plunge.

A deceleration in growth for a high-multiple stock

MongoDB is a new kind of database company built on something called a "document" architecture, a more flexible format than legacy relational databases and better suited to storing and processing both structured and unstructured data (images, videos, social media posts). Thus, adoption of the company's software has taken off in the six and a half years since MongoDB went public.

That solid growth appeared intact last quarter, as revenue grew 22% to $450.6 million, while adjusted EPS came in at $0.51 per share, with each metric exceeding expectations.

However, management's guidance for the current quarter was light, forecasting revenue of $460 million to $464 million, with adjusted EPS of $0.46 to $0.49, below the average analyst forecasts of $473 million and $0.58, respectively. The company also took down its full-year estimates, with revenue guidance of $1.88 billion to $1.9 billion, and adjusted EPS of $2.15 to $2.30, down from prior guidance of $1.9 billion to $1.93 billion and $2.27 to $2.49, respectively.

MongoDB also wasn’t exactly cheap heading into earnings, with a price-to-sales ratio in the mid-teens, relative to a P/S ratio of 9.5 today.

When a high-multiple growth stock gives disappointing guidance and lowers prior forecasts, that’s really not a good thing.

In explaining the slowdown, management pretty much blamed both a cautious macroeconomic environment and, perhaps, having acquired customers in the prior year that had large volumes but not as much consumption growth potential. CEO Dev Ittycheria also intimated that enterprise customers have been more focused on training artificial intelligence (AI) models and experimenting, but haven't yet developed a lot of AI applications with those models, at which point they would need to purchase more consumption from MongoDB.

A buying opportunity?

While MongoDB was never a cheap stock, it is regarded as the best-in-class name in document databases. And in an AI-powered world, the advantages of MongoDB's architecture could win the day. Ittycheria did give an optimistic long-term forecast beyond this year, saying:

We also see a tremendous opportunity to win more legacy workloads, as AI has now become a catalyst to modernize these applications. MongoDB’s document-based architecture is particularly well-suited for the variety and scale of data required by AI-powered applications. We are confident MongoDB will be a substantial beneficiary of this next wave of application development.

If that proves true, MongoDB could be a great long-term buy on this dip. It may just be that there’s a pause in its AI-fueled growth potential today, until enterprises train their models and are then ready to develop AI applications. Put MongoDB on your watch list after the drawdown.

Should you invest $1,000 in MongoDB right now?

Before you buy stock in MongoDB, consider this:

The Motley Fool Stock Advisor analyst team just identified what they believe are the 10 best stocks for investors to buy now… and MongoDB wasn’t one of them. The 10 stocks that made the cut could produce monster returns in the coming years.

Consider when Nvidia made this list on April 15, 2005… if you invested $1,000 at the time of our recommendation, you’d have $704,612!*

Stock Advisor provides investors with an easy-to-follow blueprint for success, including guidance on building a portfolio, regular updates from analysts, and two new stock picks each month. The Stock Advisor service has more than quadrupled the return of the S&P 500 since 2002*.

See the 10 stocks »

*Stock Advisor returns as of June 3, 2024

Billy Duberstein and/or his clients have no position in any of the stocks mentioned. The Motley Fool has positions in and recommends MongoDB. The Motley Fool has a disclosure policy.

Why MongoDB Plunged Over 35% in May was originally published by The Motley Fool

Article originally posted on mongodb google news. Visit mongodb google news
