Month: May 2025

MMS • Sergio De Simone
Article originally posted on InfoQ. Visit InfoQ

The new release of LiteRT, formerly known as TensorFlow Lite, introduces a new API to simplify on-device ML inference, enhanced GPU acceleration, support for Qualcomm NPU (Neural Processing Unit) accelerators, and advanced inference features.
One of the goals of the latest LiteRT release is to make it easier for developers to harness GPU and NPU acceleration, which previously required working with specific APIs or vendor-specific SDKs:
By accelerating your AI models on mobile GPUs and NPUs, you can speed up your models by up to 25x compared to CPU while also reducing power consumption by up to 5x.
For GPUs, LiteRT introduces MLDrift, a new GPU acceleration implementation that offers several improvements over TFLite’s GPU delegate. These include more efficient tensor-based data organization, context- and resource-aware smart computation, and optimized data transfer and conversion.
This results in significantly faster performance than CPUs, than previous versions of our TFLite GPU delegate, and even other GPU enabled frameworks particularly for CNN and Transformer models.
LiteRT also targets neural processing units (NPUs), which are AI-specific accelerators designed to speed up inference. According to Google’s internal benchmarks, NPUs can deliver up to 25× faster performance than CPUs while using just one-fifth of the power. However, there is no standard way to integrate these accelerators, and doing so often requires custom SDKs and vendor-specific dependencies.
Thus, to provide a uniform way to develop and deploy models on NPUs, Google has partnered with Qualcomm and MediaTek to add support for their NPUs in LiteRT, enabling acceleration for vision, audio, and NLP models. This includes automatic SDK downloads alongside LiteRT, as well as opt-in model and runtime distribution via Google Play.
Moreover, to make handling GPU and NPU acceleration even easier, Google has streamlined LiteRT’s API to let developers specify the target backend to use when creating a compiled model. This is done with the CompiledModel::Create method, which supports CPU, XNNPack, GPU, NNAPI (for NPUs), and EdgeTPU backends, significantly simplifying the process compared to previous versions, which required different methods for each backend.
The LiteRT API also introduces features aimed at optimizing inference performance, especially in memory- or processor-constrained environments. These include buffer interoperability via the new TensorBuffer API, which eliminates data copies between GPU memory and CPU memory; and support for asynchronous, concurrent execution of different parts of a model across CPU, GPU, and NPUs, which, according to Google, can reduce latency by up to 2x.
LiteRT can be downloaded from GitHub and includes several sample apps demonstrating how to use it.

MMS • Renato Losio
Article originally posted on InfoQ. Visit InfoQ

Redis 8 has recently hit general availability, switching to the AGPLv3 license. A year after leaving its open source roots to challenge cloud service providers and following the birth of Valkey, Redis has rehired its creator and moved back to an open source license.
Initially released under the more permissive BSD license, Redis switched to the more restrictive and not open source SSPLv1 license in March 2024, sparking concerns in the community and triggering the successful Valkey fork. Just over a year later, the project’s direction has changed again, and Redis 8.0 is once more open source software, this time under the terms of the OSI-approved AGPLv3 license.
According to Redis’ announcement, the new major release delivers several performance improvements, including up to 87% faster commands, up to 2× higher throughput in operations per second, and up to 18% faster replication. It also introduces the beta of Vector Sets, which is discussed separately on InfoQ. Salvatore Sanfilippo (aka ‘antirez’), the creator of Redis, explains:
Five months ago, I rejoined Redis and quickly started to talk with my colleagues about a possible switch to the AGPL license, only to discover that there was already an ongoing discussion, a very old one, too. (…) Writing open source software is too rooted in me: I rarely wrote anything else in my career. I’m too old to start now.
A year ago, the more restrictive license triggered several forks of Redis, including the very successful, CNCF-backed Valkey, which gained immediate support from many providers, including AWS and Google Cloud. AWS has launched ElastiCache for Valkey and MemoryDB for Valkey at significant discounts compared to its equivalent Redis-based offerings.
While noting that Valkey is currently outperforming Redis 8.0 in real-world benchmarks, Khawaja Shams, CEO and co-founder of Momento, welcomes Sanfilippo’s return to Redis and writes:
I am genuinely excited about his return because it is already impactful. He’s following through on his promise of contributing new features and performance optimizations to Redis. More profoundly, Redis 8.0 has been open-sourced again.
While many predict that developers using Valkey will not switch back to Redis, they also acknowledge that Valkey will face tougher competition. Peter Zaitsev, founder of Percona and open source advocate, highlights one of the advantages of Redis:
While a lot has been said about Redis going back to opensource with AGPLv3 License, I think it has been lost it is not same Redis which has been available under BSD license couple of years ago – number of extensions, such as RedisJSON which has not been Open Source since 2018 are now included with Redis under same AGPLv3 license. This looks like an important part of the response against Valkey, which does not have all the same features, as only the “core” Redis BSD code was forked
The article “Redis is now available under the AGPLv3 open source license” confirms that, aside from the new data type (vector sets), the open source project now integrates various Redis Stack technologies, including JSON, Time Series, probabilistic data types, and the Redis Query Engine into core Redis 8 under AGPL.
The new major release and licensing change have sparked popular threads on Reddit, with many practitioners suggesting it’s too late and calling it a sign of a previous bad decision. Some developers believe the project’s greatest asset remains its creator, while Philippe Ombredanne, lead maintainer of AboutCode, takes a more pessimistic view of the future:
Users see through these shenanigans. For Redis, the damage to their user base is likely already done, irreparable and the shattered trust never to be regained.
Redis is not the first project to switch back from SSPLv1 to AGPL following a successful fork and a loss of community support and trust. A year ago, Shay Banon, founder and CEO of Elastic, announced a similar change for Elasticsearch and Kibana, as reported on InfoQ.
Microsoft Announced Edit, New Open-Source Command-Line Text Editor for Windows at Build 2025

MMS • Bruno Couriol
Article originally posted on InfoQ. Visit InfoQ

At its Build 2025 conference, Microsoft announced Edit, a new open-source command-line text editor, to be distributed in the future as part of Windows 11. Edit aims to provide a lightweight, native, modern command-line editing experience similar to Nano and Vim.
Microsoft explained that it developed Edit because 64-bit Windows lacked a default command-line text editor, a gap dating back to the 32-bit MS-DOS Edit. Microsoft opted for a modeless design to be more user-friendly than modal editors like Vim (see Stack Overflow’s Helping One Million Developers Exit Vim) and built its own tool after finding existing modeless options either unsuitable for bundling or lacking Windows support.
Microsoft positions Edit as a simple editor for simple needs. Features include mouse support, the ability to open multiple files and switch between them, find and replace (including regex), and word wrap. The user interface is modern, with input controls similar to Visual Studio Code. There is, however, no right-click menu in the app.
Written in Rust, the editor is small, at less than 250 KB.
Discussions among developers on platforms like Reddit and Hacker News show varied reactions. Many commenters debated the necessity of a new CLI editor on Windows, questioning its use case given existing options. Some feel it’s redundant for those already using WSL with Nano or Vim or other tools like Git Bash, while others see it as potentially useful for quick, basic edits in a native Windows context without needing third-party installs or WSL.
Edit’s main contributor chimed in with a detailed rationale behind the in-house development:
We decided against nano, kilo, micro, yori, and others for various reasons. What we wanted was a small binary so we can ship it with all variants of Windows without extra justifications for the added binary size. It also needed to have decent Unicode support. It should’ve also been one built around VT output as opposed to Console APIs to allow for seamless integration with SSH. Lastly, first-class support for Windows was obviously also quite important. I think out of the listed editors, micro was probably the one we wanted to use the most, but… it’s just too large.
Microsoft has released Edit’s source code under the MIT license. Edit is not currently available in the stable channel of Windows 11. However, users can download Microsoft Edit from the project’s GitHub page.

MMS • Tom Akehurst
Article originally posted on InfoQ. Visit InfoQ

Key Takeaways
- Mocking gRPC services allows you to validate gRPC integration code during your tests while avoiding common pitfalls such as unreliable sandboxes, version mismatches, and complex test data setup requirements.
- The WireMock gRPC extension allows familiar HTTP stubbing patterns to be used with gRPC-based integration points.
- WireMock’s Spring Boot integration enables dynamic port allocation and automatic configuration injection, eliminating the need for fixed ports and enabling parallel test execution while making test infrastructure more maintainable and scalable.
- API mocking can accelerate and simplify many testing activities, but complex systems contain failure modes that can only realistically be discovered during integration testing, meaning this is still an essential practice.
- While basic unidirectional streaming methods can be mocked, more work is still needed for WireMock to support advanced testing patterns.
When your code depends on numerous external services, end-to-end testing is often prohibitively slow and complex due to issues such as unavailable sandboxes, unstable environments, and difficulty setting up necessary test data. Additionally, running many hundreds or thousands of automated tests against sandbox API implementations can result in very long build/pipeline run times.
Mock-based testing offers a practical alternative that balances comprehensive testing with execution speed. Plenty of tools exist that support mocking of REST APIs, but gRPC is much less widely supported despite its growing popularity, and much less has been written about this problem. In this article, we’ll show you how you can use familiar JVM tools (Spring Boot, gRPC, and WireMock) together. We’ll also discuss some of the tradeoffs and potential gotchas to consider when testing against mocked gRPC APIs.
Mocks, integration testing and tradeoffs
API mocking involves a tradeoff – a faster, more deterministic and cheaper-to-create API implementation at the expense of some realism. This works because many types of tests (and thus risks managed) do not rely on nuances of API behaviour we’ve chosen not to model in a mock, so we get the benefits without losing out on any vital feedback.
However, some types of defects will only surface as the result of complex interactions between system components that aren’t captured in mock behaviour, so some integration testing is still necessary in a well-rounded test strategy.
The good news is that this can be a small fraction of the overall test suite, provided that tests are created with an explicit judgement about whether they’re expected to reveal the type of complex integration issue mentioned above.
The tools we’ll be using
First, let’s define the terms we’ll be using – feel free to skip down to the next section if you’re already familiar!
gRPC is a modern network protocol built on HTTP/2 and leveraging Protocol Buffers (Protobuf) for serialization. It is often used in microservice architectures where network efficiency and low latency are important.
Spring Boot is a framework within the Spring ecosystem that simplifies Java application development through convention-over-configuration principles and intelligent defaults. It enables developers to focus on business logic while maintaining the flexibility to customize when needed.
WireMock is an open source API simulation tool that enables developers to create reliable, deterministic test environments by providing configurable mock implementations of external service dependencies.
Why mock gRPC in Spring Boot?
While building an app or service that calls out to gRPC services, you need there to be something handling those calls during development. There are a few options here:
- Call out to a real service implementation sandbox.
- Mock the interface that defines the client using an object mocking tool like Mockito.
- Mock the gRPC service locally and configure your app to connect to this during dev and test.
The first option can often be impractical for a number of reasons:
- The sandbox is slow or unreliable.
- It’s running the wrong version of the service.
- Setting up the correct test data is difficult.
- The service isn’t even built yet.
The second option avoids the above issues but significantly reduces the effectiveness of your tests due to the fact that not all of the gRPC-related code gets executed. gRPC is a complex protocol with a number of failure modes, and ideally, we want to discover these in our automated tests rather than in staging, or even worse, production.
So this leaves mocking the gRPC service over the wire, which brings the advantages of executing the gRPC integration code during your tests while avoiding the pitfalls of relying on an external sandbox. This is what we’ll explore in the remainder of this article.
Where it gets tricky
Setting up mock services for testing often introduces configuration complexity that can undermine the benefits of the mocking approach itself. A particular pain point arises from the practice of using fixed port numbers for mock servers in order to be able to configure the local environment to address them. Having fixed ports prevents parallelization of the tests, making it difficult to scale out test runners as the test suite grows. Fixed ports can also cause errors in some shared tenancy CI/CD environments.
WireMock’s Spring Boot integration addresses these challenges through dynamic port allocation and configuration injection. By automatically assigning available ports at runtime and seamlessly injecting these values into the application context, it eliminates the need for manual port management while enabling parallel test execution. This functionality, combined with a declarative annotation-based configuration system, significantly reduces setup complexity and makes the testing infrastructure more maintainable.
The second problem is that while API mocking tools have traditionally served REST and SOAP protocols well, gRPC support has remained notably scarce. To close this gap, we’ll use the newly released WireMock gRPC extension. The extension is designed to bring the same powerful mocking capabilities to gRPC-based architectures while preserving the familiar stubbing patterns that developers use for traditional HTTP testing.
Putting it all together
The following is a step-by-step guide to building a Spring Boot app that calls a simple gRPC “echo” service and returns the message contained in the request. If you want to check out and run some working code right away, all the examples in this article are taken from this demo project.
We’ll use Gradle as our build tool, but this can quite easily be adapted to Maven if necessary.
Step 1: Create an empty Spring Boot app
Let’s start with a standard Spring Boot application structure. We’ll use Spring Initializr to scaffold our project with the essential dependencies:
plugins {
    id 'org.springframework.boot' version '3.2.0'
    id 'io.spring.dependency.management' version '1.1.4'
    id 'java'
}

group = 'org.wiremock.demo'
version = '0.0.1-SNAPSHOT'

dependencies {
    implementation 'org.springframework.boot:spring-boot-starter-web'
    testImplementation 'org.springframework.boot:spring-boot-starter-test'
}
Step 2: Add gRPC dependencies and build tasks
Since gRPC requires service interfaces to be fully defined in order to work, we have to do a few things at build time:
- Generate Java stubs, which will be used when setting up stubs in WireMock.
- Generate a descriptor (.dsc) file, which will be used by WireMock when serving mocked responses.
Add the gRPC starter module
implementation 'org.springframework.grpc:spring-grpc-spring-boot-starter:0.2.0'
Add the Google Protobuf libraries and Gradle plugin
plugins {
id "com.google.protobuf" version "0.9.4"
}
In the dependencies section:
protobuf 'com.google.protobuf:protobuf-java:3.18.1'
protobuf 'io.grpc:protoc-gen-grpc-java:1.42.1'
Ensure we’re generating both the descriptor and Java sources:
protobuf {
    protoc {
        artifact = "com.google.protobuf:protoc:3.24.3"
    }
    plugins {
        grpc {
            artifact = "io.grpc:protoc-gen-grpc-java:$versions.grpc"
        }
    }
    generateProtoTasks {
        all()*.plugins {
            grpc {
                outputSubDir = 'java'
            }
        }
        all().each { task ->
            task.generateDescriptorSet = true
            task.descriptorSetOptions.includeImports = true
            task.descriptorSetOptions.path = "$projectDir/src/test/resources/grpc/services.dsc"
        }
    }
}
Add a src/main/proto folder and add a protobuf service description file in there:
syntax = "proto3";

package org.wiremock.grpc;

message EchoRequest {
    string message = 1;
}

message EchoResponse {
    string message = 1;
}

service EchoService {
    rpc echo(EchoRequest) returns (EchoResponse);
}
After adding this, we can generate the Java sources and descriptor:
./gradlew generateProto generateTestProto
You should now see a file called services.dsc under src/test/resources/grpc and some generated sources under build/generated/source/proto/main/java/org/wiremock/grpc.
Step 3: Configure application components for gRPC integration
Now that we have Java stubs generated, we can write some code that depends on them.
We’ll start by creating a REST controller that takes a GET to /test-echo and calls out to the echo gRPC service:
@RestController
public class MessageController {

    @Autowired
    EchoServiceGrpc.EchoServiceBlockingStub echo;

    @GetMapping("/test-echo")
    public String getEchoedMessage(@RequestParam String message) {
        final EchoResponse response = echo.echo(EchoServiceOuterClass.EchoRequest.newBuilder().setMessage(message).build());
        return response.getMessage();
    }
}
Then we’ll configure a Spring Boot application that initialises the gRPC service bean (that we need so it can be injected into the REST controller):
@SpringBootApplication
public class SpringbootGrpcApplication {

    @Bean
    EchoServiceGrpc.EchoServiceBlockingStub echo(GrpcChannelFactory channels) {
        return EchoServiceGrpc.newBlockingStub(channels.createChannel("local").build());
    }

    public static void main(String[] args) {
        SpringApplication.run(SpringbootGrpcApplication.class, args);
    }
}
Step 4: Set up an integration test class
Now we can start to build some tests that will rely on gRPC mocking.
First, we need to configure a few things in the test class.
@SpringBootTest(
    webEnvironment = SpringBootTest.WebEnvironment.RANDOM_PORT,
    classes = SpringbootGrpcApplication.class
)
@EnableWireMock({
    @ConfigureWireMock(
        name = "greeting-service",
        portProperties = "greeting-service.port",
        extensionFactories = { Jetty12GrpcExtensionFactory.class }
    )
})
class SpringbootGrpcApplicationTests {

    @InjectWireMock("greeting-service")
    WireMockServer echoWireMockInstance;

    WireMockGrpcService mockEchoService;

    @LocalServerPort
    int serverPort;

    RestClient client;

    @BeforeEach
    void init() {
        mockEchoService = new WireMockGrpcService(
            new WireMock(echoWireMockInstance),
            EchoServiceGrpc.SERVICE_NAME
        );
        client = RestClient.create();
    }
}
There’s quite a bit happening here:
- The @SpringBootTest annotation starts our Spring application, and we’ve configured it to use a random port (which we need if we’re going to ultimately parallelize our test suite).
- @EnableWireMock adds the WireMock integration to the test, with a single instance defined by the @ConfigureWireMock nested annotation.
- The instance is configured to use Jetty12GrpcExtensionFactory, which is the gRPC WireMock extension.
- @InjectWireMock injects the WireMockServer instance into the test.
- The WireMockGrpcService, which is instantiated in the init() method, is the gRPC stubbing DSL. It wraps the WireMock instance, taking the name of the gRPC service to be mocked.
- We’ll use the RestClient to make test requests to the Spring app, using the port number indicated by @LocalServerPort.
Now we’re ready to write a test. We’ll configure the gRPC echo service to return a simple, canned response and then make a request to the app’s REST interface and check that the value is passed through as we’d expect:
@Test
void returns_message_from_grpc_service() {
    mockEchoService.stubFor(
        method("echo")
            .willReturn(message(
                EchoResponse.newBuilder()
                    .setMessage("Hi Tom")
            )));

    String url = "http://localhost:" + serverPort + "/test-echo?message=blah";
    String result = client.get()
        .uri(url)
        .retrieve()
        .body(String.class);

    assertThat(result, is("Hi Tom"));
}
The interesting part here is the first statement in the test method. This is essentially saying “when the echo gRPC method is called, return an EchoResponse with a message value of ‘Hi Tom’”.
Note how we’re taking advantage of the Java model code generated by the protoc tool (via the Gradle build). WireMock’s gRPC DSL takes a generated model and associated builder objects, which gives you a type-safe way of expressing response bodies and expected request bodies.
The benefit of this is that we a) get IDE auto-completion when we’re building body objects in our stubbing code, and b) if the model changes (due to a change in the .proto files), it will immediately get flagged by the compiler.
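As a quick, illustrative sketch of that type safety (not taken from the demo project, and assuming the extension’s equalToMessage request matcher; check the extension docs for the exact name), a request expectation can be written with the same generated builders used for responses:

    mockEchoService.stubFor(
        method("echo")
            .withRequestMessage(equalToMessage(
                // Assumed matcher from the gRPC DSL: the expected request is built
                // with the generated EchoRequest builder, so renamed or removed
                // fields surface as compile errors rather than runtime surprises.
                EchoServiceOuterClass.EchoRequest.newBuilder().setMessage("match-this")
            ))
            .willReturn(message(
                EchoResponse.newBuilder().setMessage("matched!")
            )));

If the .proto definition changes, a stub like this stops compiling instead of silently failing at runtime.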
Step 5: Add dynamic responses
There are some cases where working with generated model classes can be overly constraining, so WireMock also supports working with request and response bodies as JSON.
For instance, suppose we want to echo the message sent in the request to the response rather than just returning a canned value. We can do this by templating a JSON string as the response body, where the structure of the JSON matches the response body type defined in the protobuf file:
@Test
void returns_dynamic_message_from_grpc_service() {
    mockEchoService.stubFor(
        method("echo").willReturn(
            jsonTemplate(
                "{\"message\":\"{{jsonPath request.body '$.message'}}\"}"
            )));

    String url = "http://localhost:" + serverPort + "/test-echo?" +
        "message=my-message";
    String result = client.get()
        .uri(url)
        .retrieve()
        .body(String.class);

    assertThat(result, is("my-message"));
}
We can also use all of WireMock’s built-in matchers against the JSON representation of the request:
mockEchoService.stubFor(
    method("echo").withRequestMessage(
        matchingJsonPath("$.message", equalTo("match-this"))
    )
    .willReturn(
        message(EchoResponse.newBuilder().setMessage("matched!"))
    ));
Current limitations
At present, WireMock’s gRPC extension has quite limited support for unidirectional streaming, allowing only a single request event to trigger a stub match, and only a single event to be returned in a streamed response. Bidirectional streaming methods are currently not supported at all.
In both cases, these are due to limitations in WireMock’s underlying model for requests and responses, and the intention is to address this in the upcoming 4.x release of the core library.
Of course, this is all open source, so any contributions to this effort would be very welcome!
Additionally, only a limited range of standard Protobuf features have been tested with the extension, and occasionally, incompatibilities are discovered, for which issues and PRs are also gratefully received.
And we’re done!
If you’ve made it this far, you hopefully now have a good idea of how gRPC mocking can be used to support your Spring Boot integration tests.
Note that while this is the approach we recommend, there is always a tradeoff involved: mocking cannot capture all real-world failure modes. We recommend complementing it with techniques such as contract testing or ongoing validation against the real API to increase the reliability of your tests.
There’s more that we haven’t shown here, including errors, faults, verifications and hot reload. The gRPC extension’s documentation and tests are great places to look if you want to learn more.
For more information about the WireMock Spring Boot integration, see this page.

MMS • RSS
Posted on mongodb google news. Visit mongodb google news
The Ampere Porting Advisor is a fork of the Porting Advisor for Graviton, an open source project from AWS, which, in turn, is a fork of the Arm High Performance Computing group’s Porting Advisor.
Originally, it was coded as a Python module that analyzed known incompatibilities for C and Fortran code. This tutorial walks you through building and using the tool and how to act on issues identified by the tool.
The Ampere Porting Advisor is a command line tool that analyzes source code for known code patterns and dependency libraries. It then generates a report with any incompatibilities with Ampere’s processors. This tool provides suggestions of minimal required and/or recommended versions to run on Ampere processors for both language runtime and dependency libraries.
It can be run on non-Arm64-based machines (such as Intel and AMD); Ampere processors are not required. The tool works only on source code, not on binaries. It does not make any code modifications, does not make API-level recommendations, and does not send data back to Ampere.
Please note: even though we do our best to find known incompatibilities, we still recommend performing the appropriate tests on your application on a system based on Ampere processors before going to production.
This tool scans all files in a source tree, regardless of whether they are included by the build system or not. As such, it may erroneously report issues in files that appear in the source tree but are excluded by the build system. Currently, the tool supports the following languages/dependencies:
Python 3+
- Python version
- PIP version
- Dependency versions in requirements.txt file
Java 8+
- Java version
- Dependency versions in pom.xml file
- JAR scanning for native method calls (requires Java to be installed)
Go 1.11+
- Go version
- Dependency versions on go.mod file
C, C++, Fortran
- Inline assembly with no corresponding aarch64 inline assembly.
- Assembly source files with no corresponding aarch64 assembly source files.
- Missing aarch64 architecture detection in autoconf config.guess scripts.
- Linking against libraries that are not available on the aarch64 architecture.
- Use of architecture-specific intrinsics.
- Preprocessor errors that trigger when compiling on aarch64.
- Use of old Visual C++ runtime (Windows specific).
- The following types of issues are detected, but not reported by default:
  - Compiler-specific code guarded by compiler-specific pre-defined macros.
- The following types of cross-compile specific issues are detected, but not reported by default:
  - Architecture detection that depends on the host rather than the target.
  - Use of build artifacts in the build process.
For more information on how to modify which issues are reported, use the tool’s built-in help: ./porting-advisor-linux-x86_64 --help
If you run into any issues, see our CONTRIBUTING file in the project’s GitHub repository.
Running the Ampere Porting Advisor as a Container
By using this option, you don’t need to worry about Python or Java versions, or any other dependency that the tool needs. This is the quickest way to get started.
Pre-requisites
- Docker or containerd + nerdctl + buildkit
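Build Container Image
Assuming the repository provides a Dockerfile at its root (check the project’s GitHub repository for the exact build instructions), build an image with the tag that the run commands below expect:
docker build -t porting-advisor .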
Run Container Image
After building the image, we can run the tool as a container. We use -v to mount a volume from our host machine to the container.
We can run it directly to console:
docker run --rm -v my/repo/path:/repo porting-advisor /repo
Or generate a report:
docker run --rm -v my/repo/path:/repo -v my/output:/output porting-advisor /repo --output /output/report.html
Windows example:
docker run --rm -v /c/Users/myuser/repo:/repo -v /c/Users/myuser/output:/output porting-advisor /repo --output /output/report.html
Running the Ampere Porting Advisor as a Python Script
Pre-requisites
- Python 3.10 or above (with PIP3 and venv module installed).
- (Optionally) Open JDK 17 (or above) and Maven 3.5 (or above) if you want to scan JAR files for native methods.
- Unzip and jq are required to run test cases.
Enable Python Environment
Linux/Mac:
python3 -m venv .venv
source .venv/bin/activate
Powershell:
python -m venv .venv
.\.venv\Scripts\Activate.ps1
Install requirements
- pip3 install -r requirements.txt
Run tool (console output)
- python3 src/porting-advisor.py ~/my/path/to/my/repo
Run tool (HTML report)
- python3 src/porting-advisor.py ~/my/path/to/my/repo --output report.html
Running the Ampere Porting Advisor as a Binary
Generating the Binary
Pre-requisites
- Python 3.10 or above (with PIP3 and venv module installed).
- (Optionally) Open JDK 17 (or above) and Maven 3.5 (or above) if you want the binary to be able to scan JAR files for native methods.
The build.sh script will generate a self-contained binary (for Linux/MacOS). It will be output to a folder called dist.
By default, it will generate a binary named like porting-advisor-linux-x86_64. You can customize the generated filename by setting the FILE_NAME environment variable.
./build.sh
For Windows, the Build.ps1 script will generate a folder with an EXE and all the files it requires to run.
.\Build.ps1
Running the Binary
Pre-requisites
Once you have generated the binary, it will only require a Java 11 runtime (or above) if you want to scan JAR files for native methods. Otherwise, the file is self-contained and doesn’t need Python to run.
Default behavior, console output:
$ ./porting-advisor-linux-x86_64 ~/my/path/to/my/repo
Generating HTML report:
$ ./porting-advisor-linux-x86_64 ~/my/path/to/my/repo --output report.html
Generating a report of just dependencies (this creates an Excel file with just the dependencies we found on the repo, no suggestions provided):
$ ./porting-advisor-linux-x86_64 ~/my/path/to/my/repo --output dependencies.xlsx --output-format dependencies
Understanding an Ampere Porting Advisor Report
Here is an example of the output report generated with a sample project:
./dist/porting-advisor-linux-x86_64 ./sample-projects/
| Elapsed Time: 0:00:03
Porting Advisor for Ampere Processor v1.0.0
Report date: 2023-05-10 11:31:52
13 files scanned.
detected go code. min version 1.16 is required. version 1.18 or above is recommended. we detected that you have version 1.19. see https://github.com/AmpereComputing/ampere-porting-advisor/blob/main/doc/golang.md for more details.
detected python code. if you need pip, version 19.3 or above is recommended. we detected that you have version 22.3.1
detected python code. min version 3.7.5 is required. we detected that you have version 3.10.9. see https://github.com/AmpereComputing/ampere-porting-advisor/blob/main/doc/python.md for more details.
./sample-projects/java-samples/pom.xml: dependency library: leveldbjni-all is not supported on Ampere processor.
./sample-projects/java-samples/pom.xml: using dependency library snappy-java version 1.1.3. upgrade to at least version 1.1.4
./sample-projects/java-samples/pom.xml: using dependency library zstd-jni version 1.1.0. upgrade to at least version 1.2.0
./sample-projects/python-samples/incompatible/requirements.txt:3: using dependency library OpenBLAS version 0.3.16. upgrade to at least version 0.3.17
detected go code. min version 1.16 is required. version 1.18 or above is recommended. we detected that you have version 1.19. see https://github.com/AmpereComputing/ampere-porting-advisor/blob/main/doc/golang.md for more details.
./sample-projects/java-samples/pom.xml: using dependency library hadoop-lzo. this library requires a manual build more info at: https://github.com/AmpereComputing/ampere-porting-advisor/blob/main/doc/java.md#building-jar-libraries-manually
./sample-projects/python-samples/incompatible/requirements.txt:5: dependency library NumPy is present. min version 1.19.0 is required.
detected java code. min version 8 is required. version 17 or above is recommended. see https://github.com/AmpereComputing/ampere-porting-advisor/blob/main/doc/java.md for more details.
Use --output FILENAME.html to generate an HTML report.
- In the report, we see several language runtimes (Python, pip, golang, Java) and their versions detected. All these messages communicate the minimum version and recommended version for these languages. Some of these lines detect that prerequisite versions have been found and are purely informative.
- We also see some messages from the dependencies detected in the Project Object Model (POM) of a Java project. These are dependencies that will be downloaded and used as part of a Maven build process, and we see three types of actionable messages:
Dependency requires more recent version
./sample-projects/java-samples/pom.xml: using dependency library snappy-java version 1.1.3. upgrade to at least version 1.1.4
- Messages of this type indicate that we should use a more recent version of the dependency, which will require rebuilding and validating the project before continuing.
Dependency requires a manual build
./sample-projects/java-samples/pom.xml: using dependency library hadoop-lzo. this library requires a manual build more info at: https://github.com/AmpereComputing/ampere-porting-advisor/blob/main/doc/java.md#building-jar-libraries-manually
- In this case, a dependency does support the architecture, but for some reason (perhaps to test the hardware features available and build an optimized version of the project for the target platform) the project must be manually rebuilt rather than relying on a pre-existing binary artifact.
Dependency is not available on this architecture
./sample-projects/java-samples/pom.xml: dependency library: leveldbjni-all is not supported on Ampere processor.
- In this case, the project is specified as a dependency but is not available for the Ampere platform. An engineer may have to examine what is involved in making the code from the dependency compile correctly on the target platform. This process can be simple but may also take considerable time and effort. Alternatively, you can adapt your project to use an alternative package providing similar functionality which does support the Ampere architecture and modify your project’s code appropriately to use this alternative.
A Transition Example for C/C++
MEGAHIT is an NGS assembler tool available as a binary for x86_64. A customer wanted to run MEGAHIT on Arm64 as part of an architecture transition. But the compilation failed on Arm64 in the first file:

The developer wanted to know what needed to be changed to make MEGAHIT compile correctly on Arm64.
In this case, Ampere Porting Advisor (APA) can play a key role. After scanning the source repository of the MEGAHIT project with APA, we get a list of issues that need to be checked before rebuilding MEGAHIT on Arm64:

Let’s investigate each error type in the list and correct them for Arm64 if necessary.
Architecture-specific build options

These errors are triggered when APA detects build options that are not valid on Arm64.
The original CMakeLists.txt uses x86_64 compile flags by default without checking the CPU architecture. To fix this, we can test a CMAKE_SYSTEM_PROCESSOR condition to make sure the flags reported by APA are only applied on x86_64 architectures.
Architecture specific instructions

The architecture-specific instructions error is triggered when APA detects non-Arm64 C-style functions being used in the code. Intrinsic instructions are compiled by the compiler directly into platform-specific assembly code, and typically each platform has its own set of intrinsics and assembly instructions optimized for that platform.
In this case, we can make use of pre-processor conditionals to only compile the _pdep_u32/64 and __cpuid/ex instructions when #if defined(__x86_64__) is true for the HasPopcnt() and HasBmi2() functions. For vec_vsx_ld, it is already wrapped in a pre-processor conditional and will only be compiled on the PowerPC architecture, so we can leave it as is.
Architecture specific inline assembly

The architecture-specific inline assembly error is triggered when APA detects assembly code being used in the code. We need to check whether the snippet of assembly code is for Arm64 or not.
The MEGAHIT project only uses the bswap assembly code in phmap_bits.h when it is being compiled on the x86_64 architecture. When being compiled on other architectures, it compiles a fall-back implementation from glibc. So no changes are required in phmap_bits.h.
In cpu_dispatch.h, two inline functions, HasPopcnt() and HasBmi2(), unconditionally include the x86_64 assembly instruction cpuid to test for CPU features on x86_64. We can add a pre-compiler conditional flag, #if defined(__x86_64__), to make sure this code is not called on Arm64, so that these functions always return false there.
Architecture specific SIMD intrinsic

The architecture-specific SIMD intrinsic error is triggered when APA detects x86_64 SIMD instructions, such as AVX256 or AVX512, being used in the code. These SIMD instructions are wrapped by pre-compiler conditional flags and will usually not cause any functionality issue on Arm64.
If there were no SIMD implementation of the algorithm for Arm64, there could be a performance gap compared to x86_64. In this case, there is a NEON SIMD implementation for Arm64 in xxh3.h, and this implementation will be picked by the compiler based on the CPU architecture. No further action needs to be taken.
Preprocessor error on AArch64
The preprocessor error is raised by APA to indicate that the Arm64 architecture may not be covered in a pre-compile stage. In this case, we can see that the pre-compile conditional is for x86_64 only and does not concern the Arm64 architecture.
Rebuild and test
Once all these adjustments have been made, we can rebuild the project:

The project compiled successfully. We then checked whether it passed the project’s test suite:

After we have manually checked and fixed all the potential pitfalls reported by APA, MEGAHIT is now able to build and run on Ampere processors.
Article originally posted on mongodb google news. Visit mongodb google news

MMS • RSS
Posted on nosqlgooglealerts. Visit nosqlgooglealerts

A prominent NoSQL database, MongoDB supports today’s applications with its flexible, scalable schema. For beginners who want to learn MongoDB in 2025, YouTube provides free, top-class tutorials that explain even the most intricate concepts step by step. freeCodeCamp, The Net Ninja, and Traversy Media are some of the channels providing easy-to-grasp tutorials on CRUD operations, schemas, and MongoDB Atlas, so that anyone can learn database skills easily.
Here, the best YouTube channels for learning MongoDB from scratch are listed, with highlights on their hands-on lessons, beginner-friendly step-by-step guides, and real-world projects for future developers.

MMS • RSS
Posted on mongodb google news. Visit mongodb google news
Captrust Financial Advisors reduced its position in MongoDB, Inc. (NASDAQ:MDB – Free Report) by 31.1% during the 4th quarter, according to the company in its most recent 13F filing with the Securities & Exchange Commission. The firm owned 995 shares of the company’s stock after selling 450 shares during the quarter. Captrust Financial Advisors’ holdings in MongoDB were worth $232,000 at the end of the most recent quarter.
Several other hedge funds have also recently bought and sold shares of MDB. Norges Bank bought a new position in MongoDB in the 4th quarter valued at about $189,584,000. Marshall Wace LLP bought a new stake in shares of MongoDB during the 4th quarter valued at approximately $110,356,000. Raymond James Financial Inc. bought a new stake in shares of MongoDB during the 4th quarter valued at approximately $90,478,000. D1 Capital Partners L.P. bought a new stake in shares of MongoDB during the 4th quarter valued at approximately $76,129,000. Finally, Amundi grew its holdings in shares of MongoDB by 86.2% during the 4th quarter. Amundi now owns 693,740 shares of the company’s stock valued at $172,519,000 after purchasing an additional 321,186 shares during the last quarter. 89.29% of the stock is owned by institutional investors.
Insider Activity at MongoDB
In other MongoDB news, Director Dwight A. Merriman sold 3,000 shares of the business’s stock in a transaction on Monday, March 3rd. The shares were sold at an average price of $270.63, for a total transaction of $811,890.00. Following the completion of the sale, the director now owns 1,109,006 shares in the company, valued at approximately $300,130,293.78. The trade was a 0.27% decrease in their position. The sale was disclosed in a document filed with the SEC, which can be accessed through the SEC website. Also, CAO Thomas Bull sold 301 shares of the business’s stock in a transaction on Wednesday, April 2nd. The shares were sold at an average price of $173.25, for a total transaction of $52,148.25. Following the sale, the chief accounting officer now owns 14,598 shares of the company’s stock, valued at $2,529,103.50. This trade represents a 2.02% decrease in their position. The disclosure for this sale can be found here. Insiders have sold 33,538 shares of company stock worth $6,889,905 in the last quarter. Company insiders own 3.60% of the company’s stock.
MongoDB Stock Down 2.1%
MongoDB stock opened at $185.01 on Thursday. MongoDB, Inc. has a 12-month low of $140.78 and a 12-month high of $379.06. The firm’s fifty day moving average is $174.84 and its 200 day moving average is $237.45. The company has a market capitalization of $15.02 billion, a PE ratio of -67.52 and a beta of 1.49.
MongoDB (NASDAQ:MDB – Get Free Report) last released its earnings results on Wednesday, March 5th. The company reported $0.19 earnings per share (EPS) for the quarter, missing analysts’ consensus estimates of $0.64 by ($0.45). MongoDB had a negative net margin of 10.46% and a negative return on equity of 12.22%. The company had revenue of $548.40 million for the quarter, compared to analyst estimates of $519.65 million. During the same quarter in the previous year, the firm earned $0.86 EPS. Sell-side analysts expect that MongoDB, Inc. will post -1.78 EPS for the current fiscal year.
Analysts Set New Price Targets
Several equities research analysts recently issued reports on MDB shares. Piper Sandler decreased their price target on shares of MongoDB from $280.00 to $200.00 and set an “overweight” rating on the stock in a report on Wednesday, April 23rd. Monness Crespi & Hardt raised shares of MongoDB from a “sell” rating to a “neutral” rating in a report on Monday, March 3rd. KeyCorp lowered shares of MongoDB from a “strong-buy” rating to a “hold” rating in a report on Wednesday, March 5th. Daiwa America raised shares of MongoDB to a “strong-buy” rating in a report on Tuesday, April 1st. Finally, Redburn Atlantic raised shares of MongoDB from a “sell” rating to a “neutral” rating and set a $170.00 price target on the stock in a report on Thursday, April 17th. Nine equities research analysts have rated the stock with a hold rating, twenty-three have issued a buy rating and one has assigned a strong buy rating to the company. According to data from MarketBeat.com, the company presently has a consensus rating of “Moderate Buy” and a consensus price target of $288.91.
Get Our Latest Stock Report on MDB
MongoDB Company Profile
MongoDB, Inc., together with its subsidiaries, provides a general-purpose database platform worldwide. The company provides MongoDB Atlas, a hosted multi-cloud database-as-a-service solution; MongoDB Enterprise Advanced, a commercial database server for enterprise customers to run in the cloud, on-premises, or in a hybrid environment; and Community Server, a free-to-download version of its database, which includes the functionality that developers need to get started with MongoDB.
See Also
Want to see what other hedge funds are holding MDB? Visit HoldingsChannel.com to get the latest 13F filings and insider trades for MongoDB, Inc. (NASDAQ:MDB – Free Report).
Receive News & Ratings for MongoDB Daily – Enter your email address below to receive a concise daily summary of the latest news and analysts’ ratings for MongoDB and related companies with MarketBeat.com’s FREE daily email newsletter.
Article originally posted on mongodb google news. Visit mongodb google news

MMS • Dan Chao
Article originally posted on InfoQ. Visit InfoQ

Transcript
Chao: I want to introduce Pkl, and introduce it in high level terms. I want to spend most of the time in the editor and showing you what it’s like to use Pkl.
I’m one of the core maintainers of Pkl at Apple. It’s a pretty small team. There’s three of us maintaining Pkl, but it’s a great group. I think we’ve done some pretty awesome things throughout the years. I am an ex-musician. I went into tech by meandering through the musician lifestyle and getting a degree in music performance, and then suddenly realizing, I really like programming.
Then I got really obsessed with it and have been doing it ever since. As far as tech, I’ve been all over the place. I’ve spent a bunch of my time doing services. I’ve also done apps. I’ve done iOS and Android. I’ve been a DevOps engineer. I’ve worn every single hat that you can wear, including being a designer for some reason. One of the things that I’ve done in the past that was also language related is that I used to design children-oriented programming languages, which are not object-oriented, not functional, none of that stuff. It’s something meant for kids between 5 and 13 years old to learn how to program.
Evolution of Static Config
This is the Google Trends result for the search term, infrastructure as code. On the vertical axis, what you’ll see is plotting the frequency of the search term, where higher on the plot is more frequent. Then, on the horizontal axis is the search term over time. What this is telling us is infrastructure as code is how the industry as a whole is coalescing for how to manage infrastructure. Infrastructure as code generally looks something like this.
Usually, the code is YAML. You write a bunch of YAML, and then you check it in into GitHub or some other source code repositories, SVN system. You check that in, and then you provide that. You apply it to some engine somewhere that takes this, and then takes what you’re declaring, and then, you get infrastructure. This is a made-up example. I think if you’re familiar with infrastructure as code, this might look similar to the things out there. Here, we’re declaring, I want two machines. I want a machine in the us-west region. I want a machine in the us-east region, and here’s all the properties about the machine that I want. Here’s the environment variables that I want. I want this many CPUs, this much memory. Here’s how to configure the healthcheck.
This has been amazing. This is something that has allowed us to really improve the velocity of changing infrastructure over time. It’s simplified deployments quite a lot. One of the things that comes with this is, eventually, your configuration will grow in complexity. It can grow in complexity in two ways. It can scale in complexity of logic. It also can grow in complexity of how much configuration that you’re describing. We’ve already started down this path. We have this format, it’s YAML. We have this problem that we need to somehow scale our system.
Often, what you see is something that looks like this. We have our engine. How do we solve this? We have YAML. Let’s go ahead and go with YAML. Let’s come up with a new property called $$imports, and this accepts some relative path, env/prod.yml. We’ll just look for that file with that path, and we’ll load it in, and then we’ll have our custom rule for how to merge that into our config. Then we can follow this line of thinking.
As more requirements come up, we can add more to our system. Maybe we need to create some parameters for our system. We’ll have $$params. Then this also accepts some relative path, and this is how you create different machines based off of different things. Then maybe we need to differentiate the number of CPUs, depending on whether we’re deploying to production or some other environment. We could keep following this path and create more rules entirely within YAML. The thing about this is, if you take a step back, you start to realize this is actually a programming language. We’ve just invented a new language for our system. It just happens to be one that’s hard to understand, hard to write, and it’s bespoke to this one system.
I want to propose that this is driving in the wrong direction. As you evolve down this path, you create things that are ad hoc. My product describes complexity this way, but another competitor might come out with their own way to manage complexity. As a team that uses both products, they’ll need to context switch between one thing and another. This causes a lot of mistakes to be made easily. For example, as far as the language is concerned, you’ve invented a new language, but the core underlying language is still YAML, and YAML doesn’t know anything about what you’re doing here.
One of my examples is, there was this interpolation syntax with these brackets. As far as the editor is concerned, if you’re in VS Code, you’re in IntelliJ, it’s just a string. It’s also hard because I need to context switch how I manage the complexity from one thing to another. Another thing that’s hard is, as I describe my configuration, I want that to be valid. Because we’re describing data, that data needs to be valid. If I say that the CPUs that I’m deploying is 10 million, then we have a problem.
What Is Pkl?
There’s a great quote from Brian Goetz, who is one of the Java architects over at Oracle. He tweeted this saying, every declarative language eventually becomes a terrible programming language, just without the aid of actual language design.
Just to clarify, this is talking about static declarative languages. We want to flip this around. What if instead of starting with YAML, or JSON, or some ad hoc format, what if we start with a language? What if we build this language with solid principles that let you catch errors, that lets you build abstractions easily, and then you use that same language to describe the data? We found that this works really well. That’s what Pkl is. Pkl is two things. It’s both a programming language and a configuration language. You’ll see what I mean later when I go into my demo. It’s a programming language in that you have all the same facilities that you have in a typical language. You have functions, type annotations, you have imports. It’s a configuration language because that’s what it’s meant to do. It doesn’t have things that are scoped beyond configuration. For example, it doesn’t have an event loop, so you can’t have async/await APIs, or something like that. That’s the overview of Pkl.
Demo
Then, I think the best way to learn about a new language is to see somebody writing it. That’s what I’m going to do. This is going to be a live demo. I’m switching over to IntelliJ. Here, I have an empty file. Before I get into this, we have IntelliJ, but we also have an LSP. We also have plugins for VS Code, for Neovim. Then, if you have some other editor that you like, if it supports the LSP, then you can get Pkl support in there as well. Going back to this. This is an empty Pkl file. This is tour.pkl. I have the Pkl plugin installed, and this is an empty object. You can think of Pkl files, if you use the JSON analogy, as an object that has an implicit open curly brace, an implicit closing curly brace, and then you declare properties inside.
For example, I can say foo = 1. Then, I have my shell open right here, and I can go ahead and eval tour.pkl, and I’m going to produce this in JSON. Pkl is a language that evals into a static format. That’s just one of the things, but it does more than that, but I’ll get to that. I can have this output in XML. I can have this produce YAML and plists. Then you can also extend Pkl to produce other formats too. We have foo = 1. I can say bar = foo. Foo is not just a property. It’s also something that could be referenced. Foo is 1, bar is foo. Let’s go back to JSON. Now we have foo and bar that are both the same thing.
Then, you can have nested objects. Here’s a nested thing, prop = 1, and now we have a nested thing. One of the concepts that drives how we think about Pkl and how we want to design Pkl is to have it closely model the target configuration so that when you read a Pkl file, it looks like the thing that you’re targeting. If you’re describing Kubernetes, for example, you don’t have to guess what that means.
Now I want to jump into what it’s like to use Pkl. Earlier in my slides, we had this. We had my made-up infrastructure as code system, and I want to show you what it looks like when you use Pkl to describe this rather than YAML. One of the concepts of Pkl is you want to describe the schema of the system. Then, once you have the schema, then you can provide the data. Let’s go ahead and do that. I’m calling this FooBarSystem.
Then, let’s take the YAML and let’s figure out what it looks like as Pkl. We have machines, so that’s a property, and we can go ahead and declare a property. In this case, instead of providing a value, because I’m describing the schema, I’m just going to provide a type annotation. Machines happens to accept a YAML sequence of objects, and in Pkl, the way that you describe that is to say this is a listing. This is a listing of something, in this case, let’s call it Machine.
Now that we’ve described this top-level machines thing, the editor has told us, I don’t know what machine is, and then we get some editor hints in here. Let’s go ahead and fill it out. Machines is also an object, so we’ll call that a class, and the region is a String. We can actually do a little better than that. Here, let’s assume that region is a closed set of strings. We only have so many regions. We don’t have us-west-3. We don’t have us-south or us-north.
In this case, we can just say this is us-west or us-east. This might look familiar if you come from TypeScript, or maybe Scala 3, or other languages that have union types. Then we have environment, and these are environment variables. This is an arbitrary map of string to string. In Pkl, we’ll say this is a mapping of string to string. Next, we have CPUs, which is an Int. Earlier I said, maybe it’s not valid if we want to create a machine with 10 million CPUs. We can actually say this is an Int where this is less than 64. What I’m doing here is I’m creating a type that has a constraint on it, and this constrains the set of possible values that you can provide to CPUs. We could also say this is a UInt, because maybe negative CPUs don’t make sense either.
One more example. The next thing is memory. Memory, over here, is a string. In Pkl, we can do a little better than that. We can say memory is a data size, and data size is a primitive that’s built right into Pkl. What this means is you don’t need to guess how that system represents 4 gigabits, because you don’t write a string, you use the primitive that’s built into Pkl.
I’m going to skip ahead, and I’m going to show you what it might look like in an actual code base. Typically, you would take a little bit of time and make this somewhat polished so that when you later use this, it’s clear to your developers what all of these things mean. I’m going to go ahead and copy and paste that to here. Then I’m going to blast through the rest of this real quick. We have CPUs. Here, I said it’s a UInt8. Then we have healthchecks. Healthchecks has a port, that’s a UInt16. The type, again, it’s a string literal union. Then the interval, on the YAML side, we have 5. What is 5? What does that mean? In Pkl, we can say this is a duration. Again, that’s a primitive type within Pkl. Then, finally, we’re going to say the output of this module is YAML.
Then when you encounter a data size, this is how you should turn that into YAML. When you encounter a duration, this is how you should turn this into YAML. In this case, a data size is turning into a string. Duration is turning into an int. One of the analogies that you can use to think about Pkl is when we defined this, we’ve just defined a form. Then when we define the data, we’re filling out that form. Let’s go ahead and do that. Here’s another Pkl file, myConfig. It starts with amends FooBarSystem. Amends is the secret sauce for a lot of how Pkl works. Amending says, I am an object that is like this other thing except with more things. In this case, the other object is FooBarSystem. Let’s go ahead and fill out the form that we just defined. This has a top-level property called machines. As I fill this in, I know what everything means. I want to declare two machines. The first one, the region is us-west. The environment has HTTP proxy. I’ll go ahead and copy and paste. Then we have CPUs, which is 4. If we said -4, then we get an error.
You get a constraint violation: I expected this type, but you provided -4. It blows up on you. One of the great things about doing it this way is you get immediate feedback, not just for this is an int and it's supposed to be a string, but this is supposed to be between 0 and 64, or 0 and whatever. You get refined validation errors built right into the language. Then the memory is 4 gigabytes. For the healthchecks, I want one healthcheck where the port is 4050, the type is TCP, and the interval is 5, except 5 doesn't make sense in this case. We'll say a healthcheck every 5 minutes, which is actually a crazy healthcheck interval, but we'll go with it.
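Filling out that form might look roughly like this (values follow the walkthrough; the proxy value is a placeholder):

```pkl
// myConfig.pkl -- data file sketch
amends "FooBarSystem.pkl"

machines {
  new {
    region = "us-west"
    environment {
      ["HTTP_PROXY"] = "http://proxy.example.com"  // placeholder value
    }
    cpus = 4        // -4 would violate the UInt(this < 64) constraint
    memory = 4.gb
    healthChecks {
      new {
        port = 4050
        type = "tcp"
        interval = 5.min
      }
    }
  }
}
```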
Then, now that we have this, I can use the Pkl CLI and eval this, and I end up with almost exactly the same YAML. I have one more machine to go. This other machine is almost the same, so I'm just going to go ahead and copy and paste it. The only difference is the region is us-east. Then the environment variable is also calculated in terms of the region, in this case. We'll eval this again. Now we get the same output. These two YAML files are structurally the same. This is what it looks like to write it in Pkl.
We can actually do a little bit better than this. Like I said, the us-west machine and the us-east machine are almost the same thing. Again, we can use amends as a way to simplify and help ourselves a little bit, because when you amend, you say, I'm like the parent object except with these things. In this case, I'm going to go ahead and create a new property. Here I'm saying it's local: I'm creating a new local property of type Machine. I don't want the region here because it differs between my two downstream machines.
Then, the environment variable is also different per machine, but we can already define it here in terms of the region; later, when we amend, it gets rendered in terms of the amending object. Let's go ahead and do that now. The syntax doesn't use the amends keyword, but this is an amends expression. When you have baseMachine wrapped with parentheses and then an object block, you're saying: this object is just like that one, except the region is us-west. We can do the same thing here: this object is like the same one, except the region is us-east. We'll go ahead and eval, and we get the same output. Now I want to show what it looks like to make an error. If I make an error here, I get a helpful message saying, this didn't make sense, and here's where the error is. That's part one.
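Here's a rough sketch of that baseMachine refactoring (the derived environment variable is a made-up example of computing a value from the region):

```pkl
// myConfig.pkl -- refactored data file sketch
amends "FooBarSystem.pkl"

// shared defaults; region is deliberately left out because it differs downstream
local baseMachine: Machine = new {
  environment {
    // late-bound: resolved against whichever region the amending object defines
    ["HTTP_PROXY"] = "http://proxy.\(region).example.com"
  }
  cpus = 4
  memory = 4.gb
  healthChecks {
    new {
      port = 4050
      type = "tcp"
      interval = 5.min
    }
  }
}

machines {
  (baseMachine) { region = "us-west" }  // amends expression: like baseMachine, plus a region
  (baseMachine) { region = "us-east" }
}
```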
We’re going to move into a real-world scenario. In this case, I’m going to show what it might look like to use Pkl to configure something with Kubernetes. Here’s part two. Within part two, I have a file called pkl:Project. I’ve declared two dependencies. These dependencies are packages that are published to pkg.pkl-lang.org. You can create your own packages and publish them at will and that works super easily. I have a pkl:Project with two dependencies, Kubernetes and Prometheus. Then here, again, I’ve defined the schema for my configuration.
In this case, the schema has Prometheus as a property to fill in, and we've defined some things in terms of Prometheus. For example, we have a configMap and we have a deployment. Prometheus is a bunch of things, but one of the things it provides is a scraper that you can deploy somewhere, and it scrapes metrics and sends them off to some server. One of the ways that you can deploy a scraper is you create a deployment and then you create a prometheus.conf that configures the scraper.
Then you deploy it, meaning you apply it to Kubernetes, and then you have your scraper running. That's what we're doing here. We're creating a configMap and a deployment, and we're defining the configMap in terms of Prometheus. The configMap's data has prometheus.conf, and the value is the textual output of Prometheus. Think about what you would do if you were defining YAML. If I were to do the same thing in YAML, I would again create a configMap with a prometheus.conf, but in YAML, I just have a string, and within the string, the editor just says, this is a YAML string. You lose context there; you don't know that this is a prometheus.conf. In Pkl, we can go the other way around. We can define an abstraction that says: if you want to deploy Prometheus, this is what you define, the Prometheus configuration, and I'll take care of everything else for you.
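In outline, the template being described might look like the sketch below. The ConfigMap import uses the dependency notation from the project file above; the Prometheus module path, the class name, and the exact property shapes in the pkl-k8s package are assumptions, since they aren't shown in the transcript.

```pkl
// PrometheusDeployment.pkl -- sketch of the abstraction described above
import "@k8s/api/core/v1/ConfigMap.pkl"
// hypothetical module path and name for the Prometheus configuration schema
import "@prometheus/PrometheusConfig.pkl"

// the one thing the user fills in
prometheus: PrometheusConfig

// everything else is derived from it
configMap: ConfigMap = (ConfigMap) {
  metadata {
    name = "prometheus-config"
  }
  // the value of prometheus.conf is the textual (YAML) output of the Prometheus config
  data = new Mapping {
    ["prometheus.conf"] = new YamlRenderer {}.renderDocument(prometheus)
  }
}

// (the deployment resource is defined similarly and omitted here)
```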
We’ve defined the form, and now we can fill out the form. Here’s, again, another file. It amends Prometheus deployment, just like we did earlier. Then we can start filling in the form. In this case, we are creating a Prometheus scraper. This needs some scrape_configs, and we’ll go with Kubernetes scrape_configs. This is of type Listing of KubernetesSdConfig. It’s a listing, and so I need to put things inside. Here’s the thing I’m putting. Let’s continue filling this out. What is namespaces? Namespaces is a namespace spec that takes names. It takes names. We’ll go ahead and scrape foo. This is not important. I just wanted to show what it’s like to use Pkl. Compare this to what it would be if you were writing something like YAML or JSON. That would probably look like you having a browser window open and then looking up documentation.
The cool thing is, because it's part of the language, it's part of the API of this config object, and I can just look it up just like I would if I were writing Java or Swift or something. Now we'll go ahead and pkl eval this, and we have an error. I didn't expect this, but I like that this is happening. There is something about my config that's invalid. What's invalid about it? The first scrape_config needs a job name, so I'll add one; again, this is called foo. Now it works. Like I said earlier, what we're doing is actually deploying two Kubernetes resources, but as a user filling in this form, I didn't have to care about that. All I care about is what the Prometheus scraper looks like.
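Put together, the amending file described in this part might look roughly like this; the property names follow the Prometheus configuration keys mentioned in the talk, and the real package's schema may well differ:

```pkl
amends "PrometheusDeployment.pkl"

prometheus {
  scrape_configs {
    new {
      job_name = "foo"    // the missing property the evaluator complained about
      kubernetes_sd_configs {
        new {
          namespaces {
            names {
              "foo"       // scrape the "foo" namespace
            }
          }
        }
      }
    }
  }
}
```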
Then, I want to take this concept and go even further. Part three. Now we have this concept where we create these abstractions and use them to deploy to the external system. Let's keep going. I'm going to show you what it might look like to use Pkl to deploy. We'll stick with the Kubernetes theme, and we'll stick with Prometheus, to show how you might use Pkl to manage a large-scale Kubernetes deployment. With Kubernetes, often what you do is you don't just deploy to a single cluster. You deploy to different clusters all over the world.
In part three, imagine that's the root of a repo, and within part three, I have top-level directories called production and staging. Within production and within staging, you have us-east and us-west, and what we're doing here is using the directory structure to manage the complexity of our config. Here's the same abstraction. Again, it's a Prometheus that you fill in; I've added a few more things here, resource requirements and the version of Prometheus that we're deploying, and then we end up deploying exactly the same thing. This is how you might use this template, this form that I just defined. I do exactly the same thing: I create a file that amends it, and then I can fill it out. In this example, this is a Prometheus deployment, but the difference is it's deploying version 10.
Then I can also say, in us-east specifically, we want to change Prometheus in this way. Because we're using the directory structure and multiple files, we can put the things that apply to production in us-east in this file specifically, and the things that apply to us-west in the us-west file. Imagine you're a team doing red-blue deployments or something, and you want to roll out version 10 and have it run for a little bit of time: you just come here, say version 10, and apply it.
Notice, this file starts with amends "…". Amends, again, means: I am an object like this other thing, except with these qualities. In this case, "…" means the first file with the same name in the directory ancestry, so Pkl just goes up until it finds another file with the same name. We'll go ahead and follow that, and it lands here. This is another file that also amends "…", and it says the resource requirements for all of production should be 10 CPU requests and 8 gigabytes of memory.
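A sketch of how those two files might look; the file names, paths, and the shape of the resource block are assumptions based on the description:

```pkl
// production/us-east/prometheus.pkl (hypothetical path)
amends "..."   // resolves to the nearest ancestor file with the same name

version = "10"
```

```pkl
// production/prometheus.pkl (hypothetical path)
amends "..."

resources {
  requests {
    cpu = 10
    memory = 8.gb
  }
}
```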
Then, if I wanted to change something that affects all of production, I just add it here. As you separate things into multiple files, it becomes really easy to figure out where that complexity is managed. Does it have to do with production? Then I just go into production/ and look there. In this layout, I have Prometheus, which extends Component, and Component, in this case, is a building block for a logical set of Kubernetes resources, so another component could be deploying Redis, for example, or deploying your bespoke application. This model lets you come in here, create something that extends Component, and just define the knobs that you care about for that thing. Then, again, we'll go ahead and eval that, and we get the same thing; we once again get YAML. If I want everything that has to do with production, I can shell-glob that, and I get all the production output. That's part three.
So far, what I’ve talked about is how you can use Pkl to target external configuration, and that’s just one of the ways that you can use Pkl. If your external system doesn’t know how to speak Pkl, then you can use Pkl to render a format that that thing speaks. For example, Kubernetes doesn’t know anything about Pkl, but that’s ok, because we could just render that into YAML, which Kubernetes does understand. However, we also provide libraries for languages. If you’re creating an application, you can think of Pkl as just a library. We maintain libraries for Swift, for Go, for Java, and for Kotlin. Then we have an extension point, and we have an amazing community that’s provided a bunch of bindings for a bunch of different languages out there. I’m going to go ahead and clear that. I want to show what it might look like, I’m going to pick on Java in this case. If you’re a Java developer, this might look familiar. I have a build.gradle. Then, within the build.gradle, I have the Pkl plugin.
Then, I’m using the Pkl plugin right here. javaCodeGenerators, this is interesting. Within this application, I have source main resources and source main Java. If we go into source main resources, I have yet another template, but in this case, this describes the config of my application. Here we have defined the host, the port, the databases, and that’s a listing of database connection. Then, I’ve defined a Java code generator, and I’m going to go ahead and run that. I’m going to call gradlew config Classes. I’m running a task that takes the Pkl source code, and it turns it into Java. Now that I’ve run that, I now have Java available. This is the same thing that we’ve just described in Pkl, except it’s Java.
Now, in the actual application, I'm using the Pkl library to evaluate that file as an instance of a Java class, and that's here, so it looks like this. It uses the ConfigEvaluator, calls the evaluate method, and then converts the result into an AppConfig. Down here, we're loading the AppConfig in my main function, and then I'm just printing. This is a demo anyway, so it doesn't actually do anything except print the line, but let's go ahead and run it. Here we go. Here's the result: indeed, it says it's listening on localhost 10105 (except it's not really listening, it just prints the config). What this means is, in Java, you also get type-safe config. You don't need to worry about what the properties are. You don't need to call .getProperty and cast it to a string, and hope it works, and hope somebody didn't misconfigure it, because in Pkl, that's type checked. If this evals successfully, you get valid data. Then, in Java, you get type-safe accessors, so you have .host and .port, and if you call the wrong property, you get a compile error.
Participant 1: You end up with an instance of a Java class that contains the actual configuration?
Chao: Yes, that’s just a Java POJO.
Summary
Pkl’s power is it can meet you with your needs as first-party config and as third-party config. Pkl is just one logical application. It’s one program. For example, what this lets you do is you can manage your infrastructure in Pkl, and you can eval the same stuff directly in Java, so you can make sure those two things don’t go out of sync with each other. Because it’s one language that you have to learn, you can manage all the complexity of configuration directly in Pkl rather than spread it out in different places, and that’s what we hope Pkl becomes. We really like it, and I hope you like it too. We’re just getting started.
Resources
If you want to get involved and learn more about Pkl, here are some things for you. Our website is pkl-lang.org, and on there you can learn all about the language. There's a tutorial that you can go through. There's a language reference. There are links to all the libraries and all the things that we do. If you want to get involved in the development of Pkl, please do. We love pull requests. We love our contributors. It's at github.com/apple/pkl. We also have an awesome community of users already; you can go to pkl.community. This is not managed by us at Apple; it's maintained by other people. On there, you can find a Discord, and some of the maintainers hang out on there too.
Questions and Answers
Participant 2: These Pkl files seem really well-suited for distribution throughout an organization. Is there any mechanism supporting that? For instance, you don't want to copy files around, but you might want to reference a library that contains Pkl files.
Chao: Earlier I showed a pkl:Project file with dependencies. Those dependencies are called packages, and you can create your own packages and publish them anywhere you want.
Participant 2: There is no particular format?
Chao: There’s a format. You’d use the CLI to create packages.
Participant 3: I’m just curious if you could talk about some design goals of this compared to CDK and the AWS Constructs library, because it feels like there’s a lot of similar goals, but I’m sure that you have some different ones in mind as well.
Chao: CDK is focused on a particular use case. CDK is meant for describing infra. That’s AWS?
Participant 3: CDK is built on a library called Constructs that’s more general purpose.
Chao: Maybe the bigger question is: I've described how you can use a programming language for config, but there are also libraries that use Go, or Python, or TypeScript as a DSL for config, so why would you use Pkl instead? I think one reason is that if you have a polyglot organization, where you have developers that use Java and developers that use other things, and you try to convince them, to configure your Java app, why don't you use the CDK in Go? That's going to be a hard sell. Another thing is that, unlike Python and TypeScript and other languages, Pkl is designed for config, so it has a lot of things those languages lack, and it purposefully lacks things that are available in those languages. For example, I showed type constraints: this is a port that should be between 10 and 63. You can describe that in Pkl, and the type system understands it, whereas TypeScript doesn't.
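For example, the constrained port type mentioned here can be written directly as a Pkl type (a minimal sketch; the range simply follows the example above):

```pkl
// the constraint is checked whenever the property is evaluated
port: Int(isBetween(10, 63)) = 42
```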
Participant 4: I think you talked about publishing packages and having a repository. Are there well-known packages to define things? I know I’ve had to deal with Envoy config, for example. Is there something that predefines all the types that I can just reference and then navigate through and understand how to configure it?
Chao: We maintain packages, and we have a doc site that you can go through to look at all the packages that we maintain. We also have code generators that take, for example, JSON Schema: if you have schemas already written in JSON Schema, you can just generate Pkl from them.
Participant 5: I’m just curious if you can talk a little bit more about something like Pulumi, and what are some reasons why someone would use Pkl over that?
Chao: I think that relates to the first question, which is, why would you use Pkl over Python or TypeScript? Pkl is a language designed for config, and it has a lot of features those tools don't have. It's also a lot more portable. If you're a Java developer, you're probably not going to want to use Pulumi to configure your Java app. Pulumi is also multiple things; it's SDKs plus the Pulumi engine, so it's a bit apples to oranges. You can use Pkl to configure Pulumi too, because they have a YAML spec.
See more presentations with transcripts

MMS • RSS
Posted on mongodb google news. Visit mongodb google news
MongoDB (NASDAQ:MDB – Get Free Report) was downgraded by research analysts at Loop Capital from a “buy” rating to a “hold” rating in a research report issued to clients and investors on Tuesday, MarketBeat reports. They presently have a $190.00 price target on the stock, down from their prior price target of $350.00. Loop Capital’s target price would suggest a potential upside of 0.52% from the stock’s previous close.
A number of other brokerages have also commented on MDB. Macquarie decreased their price target on shares of MongoDB from $300.00 to $215.00 and set a “neutral” rating on the stock in a report on Friday, March 7th. China Renaissance assumed coverage on MongoDB in a research report on Tuesday, January 21st. They set a “buy” rating and a $351.00 price target on the stock. Truist Financial cut their price objective on MongoDB from $300.00 to $275.00 and set a “buy” rating for the company in a report on Monday, March 31st. Stifel Nicolaus decreased their price objective on MongoDB from $340.00 to $275.00 and set a “buy” rating on the stock in a report on Friday, April 11th. Finally, Wells Fargo & Company lowered MongoDB from an “overweight” rating to an “equal weight” rating and cut their target price for the stock from $365.00 to $225.00 in a research note on Thursday, March 6th. Nine equities research analysts have rated the stock with a hold rating, twenty-three have given a buy rating and one has issued a strong buy rating to the company’s stock. Based on data from MarketBeat, the company currently has an average rating of “Moderate Buy” and a consensus target price of $288.91.
MongoDB Trading Down 1.2%
MongoDB stock opened at $189.01 on Tuesday. MongoDB has a 1-year low of $140.78 and a 1-year high of $379.06. The firm has a market capitalization of $15.35 billion, a PE ratio of -68.98 and a beta of 1.49. The firm’s 50 day simple moving average is $174.77 and its 200 day simple moving average is $238.15.
MongoDB (NASDAQ:MDB – Get Free Report) last posted its quarterly earnings data on Wednesday, March 5th. The company reported $0.19 earnings per share (EPS) for the quarter, missing the consensus estimate of $0.64 by ($0.45). MongoDB had a negative net margin of 10.46% and a negative return on equity of 12.22%. The business had revenue of $548.40 million for the quarter, compared to the consensus estimate of $519.65 million. During the same period in the prior year, the business posted $0.86 EPS. Research analysts forecast that MongoDB will post -1.78 earnings per share for the current year.
Insider Buying and Selling
In related news, CFO Srdjan Tanjga sold 525 shares of the stock in a transaction dated Wednesday, April 2nd. The shares were sold at an average price of $173.26, for a total value of $90,961.50. Following the completion of the transaction, the chief financial officer now directly owns 6,406 shares in the company, valued at $1,109,903.56. This trade represents a 7.57% decrease in their position. The sale was disclosed in a legal filing with the Securities & Exchange Commission, which is available through this hyperlink. Also, insider Cedric Pech sold 1,690 shares of the company’s stock in a transaction dated Wednesday, April 2nd. The stock was sold at an average price of $173.26, for a total transaction of $292,809.40. Following the completion of the sale, the insider now owns 57,634 shares in the company, valued at approximately $9,985,666.84. This trade represents a 2.85% decrease in their position. The disclosure for this sale can be found here. Insiders sold a total of 33,538 shares of company stock valued at $6,889,905 over the last three months. Corporate insiders own 3.60% of the company’s stock.
Institutional Inflows and Outflows
A number of large investors have recently added to or reduced their stakes in MDB. Strategic Investment Solutions Inc. IL purchased a new position in shares of MongoDB in the 4th quarter worth about $29,000. Cloud Capital Management LLC purchased a new position in MongoDB in the first quarter worth about $25,000. NCP Inc. bought a new position in shares of MongoDB in the fourth quarter worth approximately $35,000. Hollencrest Capital Management purchased a new stake in shares of MongoDB during the first quarter valued at approximately $26,000. Finally, Cullen Frost Bankers Inc. lifted its position in MongoDB by 315.8% in the 1st quarter. Cullen Frost Bankers Inc. now owns 158 shares of the company’s stock valued at $28,000 after acquiring an additional 120 shares in the last quarter. Hedge funds and other institutional investors own 89.29% of the company’s stock.
MongoDB Company Profile
MongoDB, Inc., together with its subsidiaries, provides a general-purpose database platform worldwide. The company provides MongoDB Atlas, a hosted multi-cloud database-as-a-service solution; MongoDB Enterprise Advanced, a commercial database server for enterprise customers to run in the cloud, on-premises, or in a hybrid environment; and Community Server, a free-to-download version of its database, which includes the functionality that developers need to get started with MongoDB.
This instant news alert was generated by narrative science technology and financial data from MarketBeat in order to provide readers with the fastest and most accurate reporting. This story was reviewed by MarketBeat’s editorial team prior to publication. Please send any questions or comments about this story to contact@marketbeat.com.
Article originally posted on mongodb google news. Visit mongodb google news

MMS • Ben Linders
Article originally posted on InfoQ. Visit InfoQ

Web accessibility ensures content is usable by people with disabilities. According to Joanna Falkowska, it can give a competitive edge, improve SEO, and support basic human rights. She emphasizes using the WCAG standard and making accessibility a shared team responsibility from the start of development, to prevent costly fixes later in the process.
Joanna Falkowska spoke about creating accessible websites at DEV: Challenge Accepted.
Web accessibility is about making web content available to users with disabilities. Falkowska suggested using the Web Content Accessibility Guidelines to improve accessibility and create an inclusive website.
Web accessibility should be considered a basic human right, said Falkowska. We should care about website accessibility because most of us are either affected by disability directly or have family members, friends, or colleagues who are, she added. Product accessibility can give you a competitive edge over other businesses.
Some companies consider accessibility as a natural consequence of their DEI policy and a basic human right. There are also those who care about accessibility primarily due to SEO results, as search engines prioritise accessible sites in their search results, Falkowska said.
Accessibility may be important for companies due to legislation. Many countries and institutions implement dedicated digital accessibility laws that concern specific institutions and/or businesses, Falkowska mentioned:
One of the most recent ones is the European Accessibility Act. It is an EU directive that will come into force in July 2025, addressing a wide range of services, among others: e-commerce, banking and transport.
The Web Content Accessibility Guidelines (WCAG) are a standard recognized worldwide. Any legal act that requires conformance with accessibility rules quotes WCAG as its reference point, Falkowska said. It is available online and completely free. There is also a thorough list of international accessibility policies available. Falkowska invited people to read it and adapt their web content according to its success criteria.
Falkowska suggested that team members should be fluent in the accessibility standard, at least in relation to the success criteria that relate to their role:
For example, it is the role of the designer to address all of the colour contrast issues there may be but also, if the authoring team has some flexibility, they should be aware of the colour contrast rules they need to follow in order for the final content to be accessible.
Accessibility issues, just like any other type of bug, tend to be more expensive to fix in the later stages of development, Falkowska said. Therefore, development teams that want to achieve and maintain an accessibility standard need to make accessibility part of the earliest stages of development.
Falkowska suggested making it part of the process to discuss and add precise accessibility acceptance criteria to the ticket description while grooming new stories:
Many teams do not do it, and as a result, accessibility gets added either after the ticket is rejected in the testing phase or even later – when the accessibility audit is completed.
Accessibility is undeniably a team sport. If we want to integrate it with the development process, we need to address it at all of its steps, Falkowska concluded.
InfoQ interviewed Joanna Falkowska about developing accessible websites.
InfoQ: What’s your advice to developers who want to start with accessibility?
Joanna Falkowska: If you are new to the subject of accessibility, I would recommend taking the time to read and understand all of the success criteria WCAG provides. It may look overwhelming at first but you can learn them in chunks, according to subsequent sections (guidelines) or based on conformance levels (from A up to AAA).
The second thing would be learning how to use assistive technology, especially screen readers, but also simple things like navigating with a keyboard instead of a mouse.
Finally, once you learn all of that, the first thing to do is… make friends with and, if needed, educate the design team. Many accessibility issues would not pop up at the development stage if the designs were following WCAG standards right from the start.
InfoQ: What can be done to make accessibility an integral part of your development process?
Falkowska: Make sure that accessibility is not simply “outsourced” to an accessibility team from a different department or an external auditing company.
The accessibility team should be there to support you only in the issues that go beyond the basic scope, e.g. you are struggling to decide what order to implement for keyboard navigation and need someone to share the most convenient solution.
The auditing company should come in only in the last stage of development. They are there to certify your accessibility level rather than to teach you what should have been done at the beginning of development. We all know that changing the designs while the app is running costs much more than designing the wireframes with accessibility in mind.
Suppose we want to integrate accessibility into the development process. In that case, the product owner should raise the topic of accessibility repeatedly: right at the design stage, before the development starts, and up until the testing phase.
If your team members do not know what to consider while developing an accessible solution, you may want to request an accessibility specialist to join your team and help you draft requirements with WCAG in mind, teaching everyone what to consider during grooming sessions.
Developing accessible solutions might bring more clients to your website. Adding accessibility skills to your personal portfolio will make you a more competitive employee in the IT market. Companies that are legally obliged to deliver accessible solutions will quickly learn that it is more cost-effective to employ team members who know what role they play in delivering accessible apps.