Breaking Down Python 3.13’s Latest Features

MMS Founder
MMS Shaaf Syed

Article originally posted on InfoQ. Visit InfoQ

Python 3.13, the latest major release of the Python programming language, is now available. It introduces a revamped interactive interpreter with streamlined features like multi-line editing, enabling users to edit entire code blocks efficiently by retrieving the full context with a single key press. Furthermore, Python 3.13 allows for the experimental disabling of the Global Interpreter Lock (GIL), alongside the introduction of a Just-in-Time (JIT) compiler aimed at enhancing performance, though it is still in an experimental phase. Lastly, the update removes several outdated modules and introduces a command-line interface for the random module.

A new interactive interpreter

Python 3.13 introduces a new interactive interpreter with exciting features, starting with multi-line editing. Previously, pressing the up key would navigate through previous commands line by line, making it challenging to edit multi-line structures like classes or functions. With Python 3.13, pressing up retrieves the entire block of code and recognizes its context, which allows for easier and more efficient editing. Furthermore, by pressing F3, users can enable paste mode and insert chunks of code, including multi-line blocks.

In previous versions, users had to type functions like help(), exit(), and quit(). With the new interactive interpreter, typing help or pressing F1 lets users browse the Python documentation, including modules. Similarly, typing the bare words exit or quit will leave the interactive interpreter. This addresses a common complaint from programmers new to REPLs (Read-Eval-Print Loops) and interactive interpreters.

Developers can now also clear the screen using the clear command in the interpreter. Previous versions of the interpreter did not have this command, and developers had to resort to terminal configurations to achieve this behavior. Lastly, the new colored prompts and tracebacks also improve usability for developers.

Free-threaded

It is now possible to disable the Global Interpreter Lock (GIL), an experimental feature that is disabled by default. Free-threaded mode requires a separate binary, e.g., python3.13t. In simplified terms, the GIL is a mutex (lock) that controls access to the Python interpreter. It provides thread safety, but it also means that only one thread can be in an execution state at a time.
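One quick way to check which mode an interpreter is running in is the sys._is_gil_enabled() helper added in 3.13; the leading underscore marks it as a private, unstable API:

import sys

# True on a default (GIL) build; False when a free-threaded build
# (e.g., python3.13t) is actually running with the GIL disabled.
print(sys._is_gil_enabled())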

The GIL makes many types of parallelism difficult, such as neural networks and reinforcement learning, or scientific and numerical calculations, where parallelism across CPUs and GPUs is necessary. Under the single-threaded execution model, threads contend for the one lock. Free-threaded mode allows applications to use all the underlying CPU cores by providing a multi-threaded programming model.

Python has been great for single-threaded applications. However, modern languages like Java, Go, and Rust all provide a model for multi-threading that makes comprehensive use of the underlying hardware. The GIL has provided thread safety for Python applications, e.g., those using C libraries. However, removing the GIL has been challenging, with failed attempts and broken C-extension integrations along the way. Developers are also used to the single-threaded model, and disabling the GIL can cause unexpected behavior.
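As a minimal sketch (workload size and thread count are illustrative), the CPU-bound threads below effectively run one at a time under a default build, but can run in parallel under a free-threaded build:

import threading

def count_down(n):
    # A pure-Python, CPU-bound loop; nothing here releases the GIL.
    while n:
        n -= 1

# Four CPU-bound threads: serialized by the GIL on a default build,
# but able to occupy four cores at once on a free-threaded build.
threads = [threading.Thread(target=count_down, args=(10_000_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()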

The Just-in-Time Compiler

An experimental JIT (Just-in-Time) compiler is now part of the CPython main branch. However, in Python 3.13 it is turned off by default. It is not part of the default build configuration and will likely stay that way for the foreseeable future. Users can enable the JIT while building CPython by specifying the following flag.

--enable-experimental-jit
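For example, a from-source build with the JIT enabled looks roughly like this (assuming a CPython 3.13 source checkout on a Unix-like system):

./configure --enable-experimental-jit
make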

The Python team has been adding enhancements in this direction for some time. For example, the Specializing Adaptive Interpreter introduced in Python 3.11 (PEP 659) rewrites bytecode instructions in place with type-specialized versions as the program runs. The JIT, however, is a leap toward a much more comprehensive array of performance enhancements.

This experimental JIT uses copy-and-patch compilation, a fast compilation technique capable of lowering both high-level languages and low-level bytecode programs to binary code by stitching together code from an extensive library of binary implementation variants. Haoran Xu and Fredrik Kjolstad describe the approach in depth in this paper.

The current interpreter and the JIT are both generated from the same specification, thereby ensuring backward compatibility. According to PEP 744, any observable differences in testing were mostly bugs in the micro-op translation or optimization stages. Furthermore, the JIT is currently developed for all Tier 1 platforms, some Tier 2 platforms, and one Tier 3 platform. The JIT is a move away from how CPython has traditionally executed Python code. Its performance is currently similar to the existing specializing interpreter, while bringing some overhead, for example, in build-time dependencies and memory consumption.

Other updates

Python 3.13 removes several legacy modules (PEP 594), soft-deprecates tools like optparse and getopt, and introduces new restrictions on certain built-in functionalities, particularly in C-based libraries. Python 3.13 also introduces a CLI for the random module. Users can now pick random words from a list of words or a sentence, or generate random floats, decimals, or integer values, by invoking the module as follows.

# random words from a sentence or list of words
python -m random this is a test
python -m random --choice this is a test

# random integers between 1 and 100
python -m random 100
python -m random --integer 100

# random floating-point numbers
python -m random 100.00 
python -m random --float 100

The behavior of locals() has also changed in Python 3.13 (PEP 667). It now returns an independent snapshot of the local variables and their values, without affecting any future calls within the same scope. Previously, the behavior of locals() was inconsistent, leading to bugs. This change brings more consistency and helps developers during debugging. The change also affects how locals() and globals() behave in exec and eval.

def locals_test_func():
    x = 10
    # Writing through the locals() snapshot no longer rebinds x.
    locals()["x"] = 13
    return x
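Calling the function confirms the snapshot semantics; in earlier CPython versions the effect of such a write was implementation-defined:

print(locals_test_func())  # prints 10: the write changed only the snapshot, not x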

Python 3.13 introduces significant changes that enhance the developer experience. Improvements like the new interactive interpreter, along with experimental features like free-threaded mode and the JIT compiler, promise a more robust and dynamic programming experience. The experimental ability to disable the GIL gives current libraries and frameworks room to adjust and make changes for future releases.



MONGODB ALERT: Bragar Eagel & Squire, P.C. is Investigating MongoDB, Inc. on Behalf of

MMS Founder
MMS RSS

Posted on mongodb google news. Visit mongodb google news

NEW YORK, Oct. 08, 2024 (GLOBE NEWSWIRE) — Bragar Eagel & Squire, P.C., a nationally recognized shareholder rights law firm, is investigating potential claims against MongoDB, Inc. (NASDAQ: MDB) on behalf of long-term stockholders following a class action complaint that was filed against MongoDB on July 9, 2024 with a Class Period from August 31, 2023 to May 30, 2024. Our investigation concerns whether the board of directors of MongoDB have breached their fiduciary duties to the company.


Article originally posted on mongodb google news. Visit mongodb google news



Presentation: Turbocharged Development: The Speed and Efficiency of WebAssembly

MMS Founder
MMS Danielle Lancashire

Article originally posted on InfoQ. Visit InfoQ

Transcript

Lancashire: Let’s talk about efficiency, and specifically carbon efficiency. That is a term that until about 9 months ago, if you said to me, I would have no idea what you were talking about. I imagine that’s potentially true for many of you when applied to the software space. The Green Software Foundation came up with a method of measuring carbon intensity for your software, called Software Carbon Intensity. That is measuring the amount of resources used over a certain number of requests to a functional unit of work. This is a very simple version of that equation. It actually looks something more like this.

When a coworker sent me this, my response was, what? It turns out, the letters actually mean things, obviously. It’s the energy consumed by the software system, times by a marginal rate for where you’re getting your electricity from. If, like me, you live in Germany, and everything is cold, that goes really big, and it’s a sad time. Plus the embodied emissions of the hardware used to run your software. If you’re constantly buying new servers, number go up. If you do things like some interesting work that came out of, I think Telefonica in Spain, where they pulled in a bunch of old Android phones to build mini edge data centers on old phones, that’s one way you can start being more efficient. Divided by the functional unit of processing an order in your e-commerce store or whatever else.
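In symbols, the Software Carbon Intensity equation being described is:

SCI = ((E × I) + M) / R

where E is the energy consumed by the software, I is the location-based marginal carbon intensity of that energy, M is the embodied emissions of the hardware, and R is the functional unit, such as one processed order.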

What Can You Do?

I think we all agree that we want to use less carbon and save some of our planet’s resources while we still can. There are a few actions that you can do. We’re going to mostly focus on two of them. One of those is energy efficiency, so using less energy to do the same work, and hardware efficiency, using less hardware to do the same work. They are surprisingly, not the same things. Then, if you have done all of those things, or have workloads that just require a lot of energy and you can’t really do anything about it, then you can time or shift where those workloads happen to use more efficient energy.

Should I just write everything to be really fast? Is that going to solve all of my carbon problems? No. This comes from a paper on energy usage of different programming languages. Java, for example, is fifth place for both energy consumption and time, but uses six times more memory to do that work than C, which is a lot more hardware that you need to do that kind of work. You need to look at a lot of these things more holistically, to understand what works for your software. Even if you wrote everything in C, in like really efficient, beautiful C with totally no memory bugs, Pinky promise, no memory bugs, it still doesn’t tell the full story, because very few individual applications use a whole CPU all of the time. We have workloads that shift throughout the day, whatever.

Most servers use 30% to 60% of their maximum power when they’re doing nothing. That’s a lot of power. Improving compute density often matters more than the efficiency of a single application. Some of you are thinking, I put my apps in containers, I run them in Kubernetes. Is that enough? Am I done? Can I go home now? Planet saved. Seventy percent of CPU is unused in most cloud Kubernetes deployments. That is a lot of C, over not a lot of R.

If we take a look at the evolution of compute. We started out deploying to bare metal. Scaling up was making a purchase order from Dell, waiting. Your users were sad. Then they gave you some more servers and you rack them in your data center. Then we decided that running single purpose servers was really dumb, and computers got better, so we could do virtualization. Then we could run multiple applications with a whole copy of an operating system and a kernel, and all of the overhead for everything. It was still better than running on a server. Then, containers came along, and we shifted that down, we started sharing the kernel. Then we started shipping a whole userland with every application anyway, and now we have containers that can be several gigabytes in size. It sucks. Whenever OpenSSL needs to be patched, and you need to go patch all of your containers like now, it’s about time.

WebAssembly (Wasm)

My very leading question marks, let’s talk about WebAssembly. What is WebAssembly? WebAssembly is basically just a portable compilation target, originally designed for web browsers, so you could run things that weren’t JavaScript in the browser with your JavaScript. It gives you a memory safe sandbox execution environment for any language, which gets really interesting, especially when you combine it with WASI. WASI is a set of portable, modular WebAssembly native APIs that can be used by Wasm code to interact with the outside world. That gives you things like your boring, typical access to files, file systems, whatever. It also gives you generic interfaces over things like key-value stores, queues, HTTP applications, everything. It means that you can change out different components of your application without having to change your code, which is really cool the first time you see it live, and it works.

What Makes Wasm Great?

What makes Wasm great? Wasm binaries are really small, because you’re just shipping your application’s code, not 3 million copies of OpenSSL, a kernel for funsies. They’re a little bit bigger than just building an x86 binary, but not many people are just shipping binaries to production. They’re really fast to start. Unlike pulling a container where you then have to go and build a whole layered file system, and do a bunch of startup and stuff, you can start and shut down a WebAssembly module in under a millisecond. That’s not after you’ve pre-warmed up some execution environment. That’s, you pull the WebAssembly. You exec it.

You close it down, and it’s a millisecond. That’s really cool. They’re also portable. I have a Kubernetes cluster, that’s an Intel NUC, like an AMD NUC, and a RISC-V board. It’s one cluster, and you deploy the same bit of code, and it runs on all of them. They’re also well isolated. With sandbox execution, linear memory, and capability-based security, it’s really hard to mess up that badly. WebAssembly slots in here, where you have your application running in some WebAssembly execution context, like Spin that I’ll be talking about a little later, on the host. If you need to patch things like OpenSSL, like I said, you do it once on the host, not in all of your applications.

Serverless

Let’s talk about serverless, or as we used to call it, FastCGI. What is serverless? A lot of talks that talk about serverless don’t really talk about what it is. Serverless means different things to different people. Most of those things fit into two different buckets, a type of application and a development model. I think about it as the concept of building something that’s event driven, ephemeral, and mostly stateless. If you are building serverless WebSockets, for example, your gateway handles the state management of socket management and keeping those sessions alive, and defers to your code only when it needs to do event handling. You can deploy those independently. A development model where you focus on your business logic, and leave all the cruft to the platform, and you don’t really need to think about where it’s going to be running.

That doesn’t really answer why. There are still servers running your code. Why can’t I just manage those myself? Giving you the benefit of focusing on business logic means you spend less time dealing with things that don’t matter to you. Especially if you work in a larger organization, you can bury a lot of the complexity, and let average developer be very happy, write that business logic, move on with their day. It’s a good time. Gives you more flexible scaling, because you don’t have to handle all of that state management in your process, and because those binaries are really small and really fast to start, you never have the complexity of needing to wait 10 minutes for a container to boot and build its caches.

Or, if you’re deploying some types of software, that might be more like 6 hours. We’ve all been there. Because WebAssembly modules don’t have the concept of a cold start, and because you don’t need to do all of that stuff, in an ideal serverless world, you can get a lot more increased density, because you’re doing a lot less stuff in your application.

Early serverless is backed by mostly micro VMs and containers. That’s a problem, because those container images tend to be really large, because you’re not just shipping your code, you are shipping a whole Linux distribution in a box. That means that not only do they take a lot of disk, and take a long time to download to every instance where they need to run, but they can also take seconds to minutes, plus however long your cache warming takes, to be ready to serve traffic. That means you often scale up far more than you need. That reduces the overall density of your clusters. That really sucks.

A lot of people’s things aren’t actually in that high demand. Eighty-one percent of Azure functions are invoked once per minute or less, but they stay running all of the time, with whatever thing is serving the traffic and also your thing. That can be prohibitively expensive. Even more amusing to me from a waste perspective is that 45% of them are invoked once per hour or less, and they stay running all of the time. That’s really sad, especially in the land of Kubernetes, where your average cluster can do 100 pods a node, but most of them do more like 20 to 30. That is a lot of servers, for not a lot of compute. A lot of serverless solutions are also really vendor specific, and have poor local development stories, and poor operational stories when you want to do things like continuous delivery, or continuous integration.

What would our ideal serverless look like? It would be language agnostic. Not everyone wants to write Rust. Not everyone wants to write JavaScript. There’s a lot of everything in the world. It should have great developer experience. The way you run things on your laptop shouldn’t be so different from how they run in production that they don’t look the same. At the same time, you shouldn’t have to spin up a Kubernetes cluster to do your job. They should be well isolated for multi-tenancy, so that you can increase density across all of your clusters. They should be cross-platform. Shouldn’t have to run on x86 Linux.

Most of us now have Arm laptops, but probably still deploy to x86 servers. Although with most modern toolchains, that doesn’t always break, I spent enough time fighting Cargo cross in the last 2 weeks to know that it’s still not fixed for everyone. Sometimes you want to deliver to Windows for reasons. Or maybe you’re doing some weird thing where you want Apple Silicon in the cloud. I’ve seen weirder. You should be able to do that too. Portable across all of those platforms. It should also be efficient and scalable. You shouldn’t be running one function per node. Ideally, you’re running 10,000, if that’s what you need.

That looks a lot like WebAssembly? Language agnostic, so anything can target WASI. Language support is constantly improving. At Fermyon, we maintain a list of the state of WebAssembly support in a bunch of languages. Right now, there’s pretty good support in things like JavaScript, Python, Rust, Go, TypeScript. We actually just released a new JavaScript runtime based on SpiderMonkey. It’s getting surprisingly good. Running JavaScript in production without having to worry about memory leaks is something that made me finally learn JavaScript. We built a tool called Spin. It’s open source. It’s a framework and developer tool for building and running WebAssembly applications. There is a demo. You do, spin new, you pick a template, hello-qcon. We’re going to write a TypeScript application.

I’m going to do the npm install dance, take a look at the code. We’re going to then use spin watch. What that’s going to do is watch our code and restart the application every time our code changes. If we call localhost and I type out, hello world, but if we change this to Hello QCon, save. Then, hello-qcon. Getting that kind of feedback while also running in the same way you would be in production is really neat. It should be well isolated for multi-tenancy. We do that using Wasmtime, which is a WebAssembly runtime from the Bytecode Alliance, a foundation where a bunch of people who work on WebAssembly stuff come together to build common tooling for everyone. Wasmtime is written in Rust, but has bindings for C and some other languages, if you want to embed it in your own applications to build things like plugin systems.
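In command form, that loop looks roughly like this (the http-ts template name is an assumption; template names and flags vary across Spin versions):

spin new -t http-ts hello-qcon
cd hello-qcon
npm install
spin watch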

Cross-platform, we have built Spin for Arm, x86-64, and even RISC-V. You can run WebAssembly in any kind of thing you want. A few examples here, you can just shove it in systemd. systemd is great. Also, in things like Kubernetes, or even Nomad. We recently released SpinKube, which we’re in the process of joining the CNCF with, to simplify running your Spin applications on any Kubernetes cluster in any of the clouds, which is awesome. It should be efficient and scalable.
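For the Kubernetes path, a Spin app is deployed through a small custom resource. This is a hedged sketch following the shape of the SpinKube quickstart; the apiVersion, executor name, and the image reference (hypothetical here) may differ across SpinKube versions:

apiVersion: core.spinoperator.dev/v1alpha1
kind: SpinApp
metadata:
  name: hello-qcon
spec:
  # OCI reference to a packaged Spin application (hypothetical image).
  image: ghcr.io/example/hello-qcon:v1
  # Run via the containerd shim, so pods don't carry a full container userland.
  replicas: 2
  executor: containerd-shim-spin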

Demo

Open http://finicky.terrible.systems. We’re going to go feed a cat together with WebAssembly. When you get there, you should see something like this. Then, you can either play as Ninja the dog or Slats the cat: our adorable, lovely mascots. I’m going to play as Slats. Then you have to feed the cat whatever it wants. Any time you do anything, that’s executing a WebAssembly module in our Kubernetes cluster, which is a mix of Arm and x86 nodes. If I come look over here, kubectl top, we get a bit of a live set of metrics off those pods. As people play the game, we potentially see load. This should autoscale. It relies on people being really active in the game. That application is actually really cool. Because your average application is written maybe in one programming language, but within your organization, there are many.

This application is written in a mix of Rust, Ruby, and JavaScript. It all runs together in a single WebAssembly module, that just as I ran in the cloud before, I can also run on my laptop. As you can see, the different modules are shown and where they’re mounted to in the application. If I just go to here, it’s the same application we were just playing, but the data is written to a SQLite database on my laptop, as opposed to Redis and a real database in production with no changes to the application. Apparently, some things were broken somewhere, locally. That’s ok. I probably didn’t run database migrations, because we’ve got to have some problem somewhere. I believe some people call it job security.

Questions and Answers

Participant 1: That interesting table you had of the 20 programming languages, there was a letter in brackets before each language name. That wasn’t clear to me what that letter indicated, was I or C, it could have been interpreted or compiled, maybe?

Lancashire: That one is giving you the ranking in energy relative to everything, so you can see the relative position, and which languages are interpreted and which are compiled.

Participant 2: If I were to run a Java application in WebAssembly or WASI, for example, one built with Spring Boot or Quarkus, how would I do that?

Lancashire: Java specifically is a very complicated question in WebAssembly. Right now, targeting Java to WebAssembly involves using a project called TeaVM, which was originally written to enable Java in the browser and eventually got WebAssembly support. The problem is, it doesn’t use the OpenJDK class library; it has its own. Only a very limited subset of things in Java specifically works right now. It’s a thing that for a lot of reasons we want to improve, but working with Oracle is a process. We don’t really know what our path forward is as a community right now. It might be that we invest a bunch of time in TeaVM.

Participant 3: What kind of resources can the workers access like local storage, network ports? How do you sandbox this whole thing? How do you open?

Lancashire: What kind of resources can a WebAssembly module access, and how does that work with things like sockets?

This is a really interesting part of WebAssembly. By default, a WebAssembly component can do nothing. You have to give it the capability of doing those things. You can give it a file system, so it can access local files. You can transparently to that application, also give it a virtual file system, or block or file system access. It can only do what you give it. For things like sockets, we give you the ability to configure the specific things that a module can connect to. For example, you can say that your frontend application for your consumers can talk to a specific backend, but it can’t talk to your back office secret stuff.

We’ll guarantee that at a runtime level. One of the more interesting things that’s happening there is, we’re expanding that security model to be able to give sealed data to components, so you could do things like, in your OpenAPI definition, mark data as PII, and then only give the PII sensitive data to components that are allowed to have it. If you pull in a library for doing analytic stuff, or whatever, you can give it the other data but not the PII data. You can guarantee that at compile time of your application. We’re not quite there yet. When we are, it’s going to be awesome.

Participant 3: Have there been any standard performance benchmarks run on Wasm, or maybe like these kind of efficiency benchmarks as well, that you saw, like you mentioned just before?

Lancashire: Have there been any benchmarks done on WebAssembly runtimes, not just the sort of density efficiency stuff?

The answer is yes. There was a research paper last year that looked at, I think, five different WebAssembly runtimes versus natively building the same application. I think it was something like a 1.3x to 1.5x overhead at runtime. A lot of advancements have been made in runtime efficiency since then. It’s a relatively negligible overhead for what I think is quite a nice benefit. It’s getting better all the time.

Participant 4: I think Docker and containerd is also working on some kind of integration with WebAssembly.

Lancashire: Yes, containerd. Docker were the first. They ship Spin as part of Docker Desktop. If you have Docker on your machine, you can Docker run a Spin application, and assuming you have the right bits of Docker enabled, should just work, which is really cool. The containerd project has a thing called runwasi, which is a library for building containerd shims, so you don’t have to run the full container thing to run WebAssembly applications.

As part of SpinKube, we have a containerd shim for running Spin applications, where we’ll also do things like cached recompilation of your application. The first time we schedule on a node, we’ll compile it, store it in containerd’s content store, and then as you scale up on that node, you won’t have to do the WebAssembly-to-native-code compilation again. It’ll reuse the same artifact. Which means you can go from a single pod to 200, kind of like that.

Participant 5: I seem to remember that in the browser, at least, WebAssembly was limited to one thread. Is that still something that’s limited here, or is that not a thing?

Lancashire: Technically still, yes. There’s more support now for doing Async/Await type things. Wasi-threads is getting very close to being at least ready for most people. Assuming you’re offloading a lot of your tangential stuff like observability to the host, there’s not a lot of reasons why you’d want in most event driven things to need threads anyway. Wasi-threads should be shipping this year.

Participant 6: If you use WebAssembly, you can reach the kernel space to use, for example, BPF probes or something?

Lancashire: Should just work.

Participant 6: If your WebAssembly application can use the kernel probe?

Lancashire: It can’t access the kernel unless you give it access to the kernel. You could set up bindings for things, but by default, it can’t do anything.

Participant 7: You talked about that you can compile Python to WebAssembly. Usually, if you’re doing intensive things in Python, you’re calling into C libraries, so NumPy would be the most obvious example here. Would you be able to take like a script using NumPy and compile that to WebAssembly?

Lancashire: You say you can compile Python to WebAssembly, how does that work when you have C extensions like NumPy?

Some things work and some things don’t work yet. With NumPy specifically, I’m not sure. My colleague Joel started a project porting a lot of C based wheels to be able to build for WebAssembly as part of his work on componentize-py, which is the thing that takes Python and builds WebAssembly components. For NumPy specifically, I’m not sure. There’s no reason why you shouldn’t be able to. It’s just a case of configuring build systems, just a build system. It’s all fine. I spent too long looking at LLVM.

Participant 8: I have a question about WebAssembly. Mostly [inaudible 00:35:56], for example, on containers, vendors of traditional containers are using native instructions so they can be optimized, for example, on Intel to use AVX-512, or its equivalent on RISC-V or Arm. How does somebody leverage this? The hardware isn’t fully used by standard binaries built by distributions, which are meant to be compatible with old CPUs of the same architecture. When you go down to optimizing binaries to use specific instructions, maybe the difference in overhead can be bigger.

Lancashire: How do you handle machine specific instructions with WebAssembly when you want to do specific optimizations for what you’re running?

WebAssembly already has support for things like SIMD, and some other machine-level things. It means your application says, I need this stuff to run. If the runtime can’t provide it, it can’t run that. The other is, because your WebAssembly application goes through a Cranelift optimization path when you run it, you can actually specialize a lot of existing software to use better instruction sets when it can anyway. If you really care, you might not want WebAssembly still, but for a lot of people you probably get some extra performance for free. A lot of the SIMD and stuff is coming from browsers wanting SIMD, too.



University of Chinese Academy of Sciences Open-Sources Multimodal LLM LLaMA-Omni

MMS Founder
MMS Anthony Alford

Article originally posted on InfoQ. Visit InfoQ

Researchers at the University of Chinese Academy of Sciences (UCAS) recently open-sourced LLaMA-Omni, a LLM that can operate on both speech and text data. LLaMA-Omni is based on Meta’s Llama-3.1-8B-Instruct LLM and outperforms similar baseline models while requiring less training data and compute.

The LLaMA-Omni architecture extends Llama-3 by including a speech encoder at the input and a speech decoder at the output. Compared to other schemes where standalone speech recognition (SR) and text-to-speech (TTS) modules are used in series with an LLM, this architecture reduces the latency between an input speech prompt and output speech generation. The model is fine-tuned on InstructS2S-200K, a custom dataset created by the UCAS team, which contains 200,000 speech prompts and their expected speech replies. According to the researchers:

Experimental results show that, compared to [baseline] speech-language models, LLaMA-Omni delivers superior responses in both content and style, with a response latency as low as 226ms. Moreover, training LLaMA-Omni requires less than 3 days on 4 GPUs, enabling rapid development of speech interaction models based on the latest LLMs. In the future, we plan to explore enhancing the expressiveness of generated speech responses and improving real-time interaction capabilities.

The research team evaluated LLaMA-Omni’s performance on two tasks: speech-to-text instruction-following (S2TIF) and speech-to-speech instruction-following (S2SIF), and compared it to other baseline models, including Qwen2-Audio. The evaluation dataset was a subset of Alpaca-Eval, with a total of 199 prompts; the team also fed the prompts into a TTS system to generate speech-based prompts.

The team used GPT-4o to automatically score each model’s output, judging it on content (whether the output achieves the user’s instruction) and style (whether the output is suited for speech interaction). On the S2TIF task, LLaMA-Omni outperformed the baselines on style, and on the S2SIF task, it outperformed on both content and style.

In a discussion about LLaMA-Omni on Hacker News, one user pointed out the benefits of an end-to-end model for speech and text, vs. a cascaded system of standalone components:

Essentially, there’s data loss from audio -> text. Sometimes that loss is unimportant, but sometimes it meaningfully improves output quality. However, there are some other potential fringe benefits here: improving the latency of replies, improving speaker diarization, and reacting to pauses better for conversations.

Users on Reddit also commented on the model, especially its use of OpenAI’s Whisper model for speech encoding:

[T]heir input approach is similar to how LLaVA added image understanding by training a glue layer for Llama and CLIP. LLaMA-Omni takes whisper as their encoder like LLaVA takes CLIP. Then the embeddings are projected into the feature space of their underlying Llama model. I didn’t immediately understand their voice output architecture so I can’t comment on that.

The integration of speech I/O into LLMs is a growing trend. Earlier this year, InfoQ covered the release of OpenAI’s GPT-4 omni, which is a version of GPT-4 that is trained end-to-end to handle speech data. InfoQ also covered Alibaba’s open-weight Qwen2-Audio, which can handle speech input but only outputs text.

The LLaMA-Omni model files are available on Hugging Face.



Rspack 1.0 Released, 23X Faster than Webpack, Compatible with Top 50 Webpack Plugins

MMS Founder
MMS Abimael Barea Puyana

Article originally posted on InfoQ. Visit InfoQ

Rspack, a new JavaScript bundler that strives to be fully compatible with Webpack, is now production-ready. Rspack 1.0 is compatible with 40+ of the top 50 Webpack plugins. 

Rspack credits Rust for its performance and touts a 23x build time improvement over Webpack. The company behind Rspack, ByteDance, mentions saving millions in CI costs.

ByteDance released Rspack 1.0 18 months after its first public version. ByteDance uses Rspack in applications like TikTok, Douyin, Lark, and Coze. ByteDance claims enterprise users like Microsoft, Amazon, and Discord are adopting it, although the number of their products using it hasn’t yet been revealed.

The Rspack name reveals two primary design decisions: it’s written in Rust and is compatible with Webpack. The migration guide shows how Rspack shares the configuration with Webpack, simplifying the migration process.

Rspack claims to be compatible with 80% of the top 50 most downloaded Webpack plugins. The Module Federation plugin, used by many firms in the enterprise segment as part of the micro-front-end architecture, has been detached from Webpack. Module Federation 2.0 is backwards compatible and supported by Rspack 1.0. 

When running Turbopack’s bench cases (1,000 React components), Rspack finishes a production build in 282 ms, while Webpack 5 with Babel does it in 6,523 ms (23x slower). The Rspack team also credits Rust for achieving lower start-up time. One Reddit developer mentions the ability of Rust to tap into multi-core CPUs as an additional source of performance improvement:

Highly parallelized architecture: webpack is limited by JavaScript’s weak support for multithreading. By contrast, Rspack’s native code takes full advantage of modern multi-core CPUs.

Some developers independently claimed to do a production build in 8.1s, while Webpack took 19.1s (2.35x slower). In a later update, they refer to the known Rspack performance bottlenecks in the documentation, explaining that plugins such as postcss-loader or html-webpack-plugin can affect Rspack performance.

Rust-based bundlers improve the performance and speed of JavaScript development, allowing for faster releases and reduced CI costs. Zackary Jackson, creator of the Module Federation plugin and Module Federation 2.0, and now an infrastructure architect in the Web Infrastructure Team at ByteDance, claims that using Rspack saves a huge amount of money in infrastructure costs.

We will NEVER monetise rspack or its family of tools. They are free and open. For all time, always. The ROI of just using them for ourselves is in the hundreds of millions.

Rust’s impact on cost reduction is not exclusive to the JavaScript ecosystem. AWS uses it in multiple projects, helping the company improve efficiency and achieve its sustainability goals. 

The experiments section of the Rspack website provides a glimpse of what’s next. Features include compatibility with top-level await (a stage 4 TC39 feature) and compilation improvements that build only the accessed entry points. The Rspack team has other items on the roadmap, such as faster hot module replacement (HMR), TypeScript-based optimisation, and support for React Server Components.

Rspack 1.0 is one of the products created by the ByteDance Web Infra team, alongside Rsbuild, Rspress, Rsdoctor, and Rslib. This release also introduced a brand-new website and improved the existing documentation.



ASP.NET Core OData 9 Improves Performance, Drops .NET Framework

MMS Founder
MMS Edin Kapic

Article originally posted on InfoQ. Visit InfoQ

Microsoft announced the availability of the ASP.NET Core OData 9 package on August 30th, 2024. The new package brings ASP.NET Core OData in line with the OData .NET 8 libraries, changing the internal details of how data is encoded in the OData format. According to Microsoft, this brings it more in line with the OData specification.

Earlier in August 2024, Microsoft updated the OData .NET libraries to version 8.0.0. The most important change is the dropping of support for the legacy .NET Framework. From this version on, only .NET 8 and higher are supported. Developers using the legacy .NET Framework can still use version 7.x of the OData libraries, which will be actively supported until March 2025, when they move into maintenance mode.

The OData 8 libraries use a new JSON writer, System.Text.Json.Utf8JsonWriter, to serialise and deserialise the JSON payload. The new writer is significantly faster and needs less memory than the old JsonWriter created by the Microsoft.OData.Json.DefaultJsonWriterFactory, since it is based on a Stream rather than a TextWriter. While the new writer has been available since OData version 7.12.2, it is now the default implementation in OData 8.

Developers can still use the old writer, if needed, by calling the AddOData method in the service builder and providing an ODataJsonWriterFactory instance, which corresponds to the older DefaultJsonWriterFactory and was renamed for clarity.

builder.Services.AddControllers().AddOData(options => options.EnableQueryFeatures().AddRouteComponents(routePrefix: string.Empty, model: modelBuilder.GetEdmModel(), configureServices: (services) =>
{
    services.AddScoped<Microsoft.OData.Json.IJsonWriterFactory>(sp => new Microsoft.OData.Json.ODataJsonWriterFactory());
}));

The new writer doesn’t serialise JSON the same way as the old one. It doesn’t encode all the higher-ASCII Unicode characters as the older writer did. For example, it won’t encode non-Latin symbols such as Greek letters as Unicode escape sequences; a Greek α is output as the character itself rather than as \u03b1. The old writer would encode virtually all non-ASCII characters as escape sequences, making the payload bigger and the encoding process slower. Where the new JSON writer does escape, it uses uppercase hexadecimal digits in the escape sequences, instead of the lowercase used by previous versions.

Another major change in ASP.NET Core OData 9 is how the dependency injection works. While the earlier versions of OData libraries would use non-standard IContainerBuilder to configure OData services, the updated library uses the same abstractions as .NET does, namely IServiceProvider.

builder.Services.AddControllers().AddOData(options => options.EnableQueryFeatures().AddRouteComponents(routePrefix: string.Empty, model: modelBuilder.GetEdmModel(), configureServices: (services) =>
{
    services.AddDefaultODataServices(odataVersion: Microsoft.OData.ODataVersion.V4, configureReaderAction: (messageReaderSettings) =>
    {
        // Relevant changes to the ODataMessageReaderSettings instance here
    }, configureWriterAction: (messageWriterSettings) =>
    {
        // Relevant changes to the ODataMessageWriterSettings instance here
    }, configureUriParserAction: (uriParserSettings) =>
    {
    // Relevant changes to the ODataUriParserSettings instance here
    });
}));

There are some minor changes in the new OData libraries, like the removal of legacy implementations and old standards such as the JSONP format. For a full list, developers can check the release notes of the OData 8 .NET libraries.

The new ASP.NET Core OData 9 libraries are distributed as NuGet packages. The new release has been downloaded 150,000 times in the last six weeks. The source code of ASP.NET Core OData is available on GitHub, and the repository currently has 458 open issues.



AQR Capital Management LLC Decreases Stake in MongoDB, Inc. (NASDAQ:MDB)

MMS Founder
MMS RSS

Posted on mongodb google news. Visit mongodb google news

AQR Capital Management LLC lowered its holdings in MongoDB, Inc. (NASDAQ:MDB) by 49.2% during the second quarter, according to the company in its most recent Form 13F filing with the SEC. The institutional investor owned 10,710 shares of the company’s stock after selling 10,375 shares during the quarter. AQR Capital Management LLC’s holdings in MongoDB were worth $2,668,000 as of its most recent filing with the SEC.

A number of other hedge funds and other institutional investors also recently made changes to their positions in MDB. Raleigh Capital Management Inc. lifted its holdings in MongoDB by 24.7% during the 4th quarter. Raleigh Capital Management Inc. now owns 182 shares of the company’s stock worth $74,000 after buying an additional 36 shares in the last quarter. Advisors Asset Management Inc. increased its position in MongoDB by 12.9% in the first quarter. Advisors Asset Management Inc. now owns 324 shares of the company’s stock worth $116,000 after purchasing an additional 37 shares during the period. Atria Investments Inc raised its holdings in MongoDB by 1.2% during the 1st quarter. Atria Investments Inc now owns 3,259 shares of the company’s stock worth $1,169,000 after purchasing an additional 39 shares during the last quarter. Taylor Frigon Capital Management LLC boosted its stake in shares of MongoDB by 0.4% in the 2nd quarter. Taylor Frigon Capital Management LLC now owns 9,903 shares of the company’s stock valued at $2,475,000 after purchasing an additional 42 shares during the last quarter. Finally, Fifth Third Bancorp raised its stake in shares of MongoDB by 7.6% during the second quarter. Fifth Third Bancorp now owns 620 shares of the company’s stock worth $155,000 after buying an additional 44 shares during the last quarter. 89.29% of the stock is currently owned by institutional investors and hedge funds.

MongoDB Stock Performance

Shares of MDB opened at $259.44 on Tuesday. The company has a current ratio of 5.03, a quick ratio of 5.03 and a debt-to-equity ratio of 0.84. MongoDB, Inc. has a 12 month low of $212.74 and a 12 month high of $509.62. The firm has a fifty day simple moving average of $262.12 and a 200 day simple moving average of $289.28. The stock has a market capitalization of $19.03 billion, a price-to-earnings ratio of -92.33 and a beta of 1.15.

MongoDB (NASDAQ:MDB) last released its earnings results on Thursday, August 29th. The company reported $0.70 earnings per share (EPS) for the quarter, beating analysts’ consensus estimates of $0.49 by $0.21. The business had revenue of $478.11 million for the quarter, compared to the consensus estimate of $465.03 million. MongoDB had a negative return on equity of 15.06% and a negative net margin of 12.08%. MongoDB’s quarterly revenue was up 12.8% on a year-over-year basis. During the same quarter in the previous year, the business earned ($0.63) earnings per share. Research analysts forecast that MongoDB, Inc. will post -2.44 EPS for the current fiscal year.

Wall Street Analysts Weigh In

MDB has been the subject of a number of analyst reports. Oppenheimer lifted their target price on shares of MongoDB from $300.00 to $350.00 and gave the stock an “outperform” rating in a report on Friday, August 30th. Sanford C. Bernstein lifted their price objective on MongoDB from $358.00 to $360.00 and gave the stock an “outperform” rating in a research note on Friday, August 30th. Wells Fargo & Company increased their target price on MongoDB from $300.00 to $350.00 and gave the company an “overweight” rating in a research note on Friday, August 30th. Stifel Nicolaus boosted their target price on MongoDB from $300.00 to $325.00 and gave the stock a “buy” rating in a research report on Friday, August 30th. Finally, Truist Financial increased their price target on shares of MongoDB from $300.00 to $320.00 and gave the company a “buy” rating in a research report on Friday, August 30th. One analyst has rated the stock with a sell rating, five have given a hold rating and twenty have assigned a buy rating to the stock. According to MarketBeat, MongoDB presently has an average rating of “Moderate Buy” and an average price target of $337.56.


Insider Activity

In related news, CEO Dev Ittycheria sold 3,556 shares of the business’s stock in a transaction on Wednesday, October 2nd. The shares were sold at an average price of $256.25, for a total transaction of $911,225.00. Following the transaction, the chief executive officer now directly owns 219,875 shares in the company, valued at approximately $56,342,968.75. The trade was a 0.00% decrease in their ownership of the stock. The sale was disclosed in a filing with the Securities & Exchange Commission. In other news, Director Dwight A. Merriman sold 2,000 shares of MongoDB stock in a transaction that occurred on Friday, August 2nd. The stock was sold at an average price of $231.00, for a total transaction of $462,000.00. Following the completion of the transaction, the director now directly owns 1,140,006 shares in the company, valued at approximately $263,341,386. The trade was a 0.00% decrease in their position. In the last 90 days, insiders sold 15,896 shares of company stock valued at $4,187,260. 3.60% of the stock is owned by insiders.

MongoDB Profile


MongoDB, Inc, together with its subsidiaries, provides general purpose database platform worldwide. The company provides MongoDB Atlas, a hosted multi-cloud database-as-a-service solution; MongoDB Enterprise Advanced, a commercial database server for enterprise customers to run in the cloud, on-premises, or in a hybrid environment; and Community Server, a free-to-download version of its database, which includes the functionality that developers need to get started with MongoDB.


Article originally posted on mongodb google news. Visit mongodb google news



Sasi Kiran Parasa Wins a 2024 Global Recognition Award for Pioneering AI Integration in …

MMS Founder
MMS RSS

Posted on mongodb google news. Visit mongodb google news

Sasi Kiran Parasa, a Senior HRIS Engineer at MongoDB, received a 2024 Global Recognition Award for integrating AI into HR processes. His innovative work in compensation management, performance management, and HR operations has significantly improved efficiency and set new industry standards for human capital management.

Sasi Kiran Parasa, an innovative HR technologist, has been honored with a 2024 Global Recognition Award for his groundbreaking integration of artificial intelligence into human resources processes. The award acknowledges Parasa’s significant contributions to HR technology, particularly his pioneering work using AI to enhance recruitment, performance management, and overall HR operations.

As a Senior HRIS Engineer at MongoDB, Parasa has utilized his extensive experience in Human Capital Management (HCM) to modernize HR practices. His expertise in SAP SuccessFactors and Workday, covering modules such as Employee Central, Performance and Goals Management, and Compensation Management, has led to notable improvements in organizational efficiency and employee engagement for many clients.

Innovative AI-Driven HR Solutions

Parasa’s most significant achievement is implementing AI-powered solutions in recruitment and HR processes. This innovative approach has considerably reduced time-to-hire metrics and improved the quality of new hires, marking a substantial advancement in HR technology.

“By integrating AI into our HR processes, we’ve streamlined operations and vastly improved the quality of our talent acquisition,” Parasa stated. “This technology allows us to make more informed, data-driven recruitment and performance management decisions.”

Parasa’s work includes developing intelligent job-matching algorithms, automated performance evaluation systems, and predictive analytics models within the SAP SuccessFactors platform. These tools have enabled organizations to optimize their HR processes and make more informed decisions, establishing new industry standards for efficiency and accuracy in human capital management.

Industry-Wide Impact

The influence of Parasa’s innovations extends beyond individual organizations. His demonstration of the concrete benefits of AI applications in talent acquisition and management has encouraged widespread adoption of these technologies across the HR sector. This effect has contributed to a broader evolution of HR practices, expanding the possibilities in human resource management through technology.

“The adoption of AI in HR is not just about efficiency; it’s about reimagining how we approach human capital management,” Parasa explained. “We’re seeing a shift towards more strategic, data-informed HR practices that have the potential to reshape entire organizations.”

Thought Leadership and Continuous Learning

Parasa’s impact goes beyond his technical innovations. As a thought leader in HR technology, he has shared valuable insights through speaking engagements, industry publications, and original research. His contributions have guided discussions around AI and digital transformation in HR, motivating professionals and organizations to embrace technological advancements.

This thought leadership is rooted in a commitment to continuous learning, as demonstrated by Parasa’s multiple SAP certifications, including Professional Application Consultant credentials in SuccessFactors Compensation and Variable Pay. His dedication to expanding his knowledge keeps him current with HCM technology trends and best practices.

“In this rapidly changing field, staying current is not just beneficial—it’s essential,” Parasa commented. “My goal is to continually connect theoretical possibilities with practical applications, providing real value to organizations aiming to modernize their HR practices.”

Future Outlook

As HR continues to evolve, professionals like Parasa play a crucial role in shaping its future. His ongoing efforts to innovate and educate promise to drive further advancements, ensuring that HR practices align with advanced technological capabilities.

“Looking ahead, I see significant potential for AI and machine learning to enhance HR further,” Parasa shared. “From predictive analytics in workforce planning to AI-driven personalized employee development programs, the possibilities are fascinating.”

Recognition and Impact

A 2024 Global Recognition Award celebrates Sasi Kiran Parasa’s exceptional contributions to HR technology and digital transformation. His combination of technical expertise, innovative thinking, and practical implementation skills has made him a leader in AI integration within HR processes.

Alex Sterling from the Global Recognition Awards commented, “Sasi Kiran Parasa’s work shows the beneficial effects of technology in HR. His innovations improve organizational efficiency and enhance employee experiences, demonstrating the significant impact of technology in human capital management. Parasa’s achievements set a high standard for excellence in the field.”

This award recognizes Parasa’s past achievements and his potential for continued influence on the future of work and human resource management. His career inspires HR professionals worldwide and shows the potential of embracing technology in this people-focused field.

About Global Recognition Awards

Global Recognition Awards is an international organization that recognizes exceptional companies and individuals who have contributed significantly to their industry.

Contact Info:
Name: Alex Sterling
Email: Send Email
Organization: Global Recognition Awards
Website: https://globalrecognitionawards.org

Release ID: 89143161


Article originally posted on mongodb google news. Visit mongodb google news



Java News Roundup: OpenJDK JEPs, Plans for Spring 7.0, JobRunr 7.3, Keycloak 26.0, Debezium 3.0

MMS Founder
MMS Michael Redlich

Article originally posted on InfoQ. Visit InfoQ

This week’s Java roundup for September 30th, 2024 features news highlighting: new OpenJDK JEPs and those targeted for JDK 24; plans for Spring Framework 7.0; JobRunr 7.3.0, Keycloak 26.0.0 and Debezium 3.0.0.

OpenJDK

After its review had concluded, JEP 475, Late Barrier Expansion for G1, was promoted from Proposed to Target to Targeted status for JDK 24. This JEP proposes to simplify the implementation of the G1 garbage collector’s barriers, which record information about application memory accesses, by shifting their expansion from early in the C2 JIT’s compilation pipeline to later. The goal is to reduce the execution time of C2 when using the G1 collector.

Two days after having been elevated from its JEP Draft 8340841 to Candidate status, JEP 489, Vector API (Ninth Incubator), has been promoted from Candidate to Proposed to Target for JDK 24. This JEP incorporates enhancements in response to feedback from the previous eight rounds of incubation, namely: JEP 469, Vector API (Eighth Incubator), delivered in JDK 23; JEP 460, Vector API (Seventh Incubator), delivered in JDK 22; JEP 448, Vector API (Sixth Incubator), delivered in JDK 21; JEP 438, Vector API (Fifth Incubator), delivered in JDK 20; JEP 426, Vector API (Fourth Incubator), delivered in JDK 19; JEP 417, Vector API (Third Incubator), delivered in JDK 18; JEP 414, Vector API (Second Incubator), delivered in JDK 17; and JEP 338, Vector API (Incubator), delivered as an incubator module in JDK 16. Originally slated to be a re-incubation that reused the original JEP, it was ultimately decided to keep enumerating a new JEP for each round. The Vector API will continue to incubate until the necessary features of Project Valhalla become available as preview features. At that time, the Vector API team will adapt the Vector API and its implementation to use them, and will promote the Vector API from Incubation to Preview. The review is expected to conclude on October 9, 2024.
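The shape of the API has been stable across these incubations. As a rough sketch (not taken from the JEP itself), an element-wise addition of two float arrays looks like the following; it assumes compilation and execution with --add-modules jdk.incubator.vector:

    // A minimal sketch of the incubating Vector API (jdk.incubator.vector).
    import jdk.incubator.vector.FloatVector;
    import jdk.incubator.vector.VectorSpecies;

    public class VectorAddDemo {
        // Pick the widest SIMD shape the hardware supports.
        private static final VectorSpecies<Float> SPECIES = FloatVector.SPECIES_PREFERRED;

        static void add(float[] a, float[] b, float[] c) {
            int i = 0;
            int upper = SPECIES.loopBound(a.length);
            // Process full vector-width chunks with SIMD instructions.
            for (; i < upper; i += SPECIES.length()) {
                FloatVector va = FloatVector.fromArray(SPECIES, a, i);
                FloatVector vb = FloatVector.fromArray(SPECIES, b, i);
                va.add(vb).intoArray(c, i);
            }
            // Scalar tail loop for the remaining elements.
            for (; i < a.length; i++) {
                c[i] = a[i] + b[i];
            }
        }
    }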

JEP 484, Class-File API, has been promoted from Candidate to Proposed to Target for JDK 24. This JEP proposes to finalize this feature in JDK 24 after two rounds of preview, namely: JEP 466, Class-File API (Second Preview), delivered in JDK 23; and JEP 457, Class-File API (Preview), delivered in JDK 22. This feature provides an API for parsing, generating, and transforming Java class files. This will initially serve as an internal replacement for ASM, the Java bytecode manipulation and analysis framework, in the JDK, with plans to have it opened as a public API. Brian Goetz, Java Language Architect at Oracle, has characterized ASM as “an old codebase with plenty of legacy baggage” and provided background information on how this feature will evolve and ultimately replace ASM. The review is expected to conclude on October 8, 2024.
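As a hedged illustration, assuming the finalized API keeps the shape of the JDK 23 second preview, parsing a class file and listing its method signatures looks roughly like this:

    // A minimal sketch of the Class-File API (java.lang.classfile); on JDK 23
    // it is still a preview API and needs --enable-preview.
    import java.lang.classfile.ClassFile;
    import java.lang.classfile.ClassModel;
    import java.lang.classfile.MethodModel;
    import java.nio.file.Files;
    import java.nio.file.Path;

    public class ListMethods {
        public static void main(String[] args) throws Exception {
            byte[] bytes = Files.readAllBytes(Path.of(args[0]));
            // Parse the raw bytes into an immutable model of the class file.
            ClassModel cm = ClassFile.of().parse(bytes);
            for (MethodModel mm : cm.methods()) {
                System.out.println(mm.methodName().stringValue()
                        + mm.methodType().stringValue());
            }
        }
    }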

JEP 492, Flexible Constructor Bodies (Third Preview), has been promoted from its JEP Draft 8338287 to Candidate status. This JEP proposes a third round of preview, with minimal change, to gain additional experience and feedback from the previous two rounds, namely: JEP 482, Flexible Constructor Bodies (Second Preview), delivered in JDK 23; and JEP 447, Statements before super(…) (Preview), delivered in JDK 22. This feature allows statements that do not reference the instance being created to appear before the this() or super() calls in a constructor, and preserves existing safety and initialization guarantees for constructors. Changes in this JEP include: a treatment of local classes; and a relaxation of the restriction that fields cannot be accessed before an explicit constructor invocation to a requirement that fields cannot be read before an explicit constructor invocation. Gavin Bierman, Consulting Member of Technical Staff at Oracle, has provided an initial specification of this JEP for the Java community to review and provide feedback.
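A minimal sketch of the feature (not from the JEP; it requires --enable-preview to compile and run) validates a constructor argument before the superclass constructor executes:

    // Flexible constructor body: fail fast before super() runs.
    public class PositiveValue extends Number {
        private final long value;

        public PositiveValue(long value) {
            if (value <= 0) {             // statement before super()
                throw new IllegalArgumentException("value must be positive");
            }
            super();                      // explicit constructor invocation
            this.value = value;           // ordinary initialization after it
        }

        @Override public int intValue() { return (int) value; }
        @Override public long longValue() { return value; }
        @Override public float floatValue() { return value; }
        @Override public double doubleValue() { return value; }
    }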

JEP 491, Synchronize Virtual Threads without Pinning, has been promoted from its JEP Draft 8337395 to Candidate status. This JEP proposes to:

Improve the scalability of Java code that uses synchronized methods and statements by arranging for virtual threads that block in such constructs to release their underlying platform threads for use by other virtual threads. This will eliminate nearly all cases of virtual threads being pinned to platform threads, which severely restricts the number of virtual threads available to handle an application’s workload.
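Existing code benefits without modification. The following sketch, illustrative rather than taken from the JEP, shows the pattern the change targets: many virtual threads blocking while holding a monitor:

    // Virtual threads blocking inside a synchronized block. Before JEP 491
    // each blocked virtual thread pinned its carrier platform thread; with
    // the change, the carrier is released for use by other virtual threads.
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;

    public class SynchronizedVirtualThreads {
        private static final Object LOCK = new Object();

        public static void main(String[] args) {
            try (ExecutorService executor = Executors.newVirtualThreadPerTaskExecutor()) {
                for (int i = 0; i < 10_000; i++) {
                    executor.submit(() -> {
                        synchronized (LOCK) {
                            try {
                                Thread.sleep(10); // blocks while holding a monitor
                            } catch (InterruptedException e) {
                                Thread.currentThread().interrupt();
                            }
                        }
                    });
                }
            } // close() waits for all tasks to finish
        }
    }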

JEP 488, Primitive Types in Patterns, instanceof, and switch (Second Preview), has been promoted from its JEP Draft 8335876 to Candidate status. This JEP, under the auspices of Project Amber, proposes a second round of preview, without change, to gain additional experience and feedback from the previous round of preview, namely: JEP 455, Primitive Types in Patterns, instanceof, and switch (Preview), delivered in JDK 23. This feature enhances pattern matching by allowing primitive type patterns in all pattern contexts, and extending instanceof and switch to work with all primitive types. More details may be found in this draft specification by Aggelos Biboudis, Principal Member of Technical Staff at Oracle.
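As a brief sketch of what the preview permits (compiled and run with --enable-preview), both switch and instanceof can now test primitive values:

    // Primitive type patterns in switch and instanceof.
    public class PrimitivePatterns {
        static String describe(int status) {
            return switch (status) {
                case 200 -> "OK";
                case int i when i >= 500 -> "server error " + i;
                case int i -> "other status " + i; // exhaustive pattern
            };
        }

        public static void main(String[] args) {
            System.out.println(describe(503));
            int value = 42;
            // instanceof on a primitive tests whether the value fits losslessly:
            if (value instanceof byte b) {
                System.out.println("fits in a byte: " + b);
            }
        }
    }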

JEP 487, Scoped Values (Fourth Preview), has been promoted from its JEP Draft 8338456 to Candidate status. This JEP proposes a fourth preview, with one change, in order to gain additional experience and feedback from one round of incubation and three rounds of preview, namely: JEP 481, Scoped Values (Third Preview), delivered in JDK 23; JEP 464, Scoped Values (Second Preview), delivered in JDK 22; JEP 446, Scoped Values (Preview), delivered in JDK 21; and JEP 429, Scoped Values (Incubator), delivered in JDK 20. Formerly known as Extent-Local Variables (Incubator), this feature enables sharing of immutable data within and across threads. This is preferred to thread-local variables, especially when using large numbers of virtual threads.
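A minimal sketch of the API under the preview flag; the REQUEST_ID binding is a hypothetical example, not from the JEP:

    // A scoped value is bound for the dynamic extent of run() and is readable,
    // but not reassignable, anywhere below it on the same thread.
    public class ScopedValueDemo {
        private static final ScopedValue<String> REQUEST_ID = ScopedValue.newInstance();

        public static void main(String[] args) {
            ScopedValue.where(REQUEST_ID, "req-42").run(() -> handle());
        }

        static void handle() {
            // Automatically unbound when run() returns.
            System.out.println("handling " + REQUEST_ID.get());
        }
    }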

JEP 14, The Tip & Tail Model of Library Development, a new informational JEP, memorializes the “tip and tail” release model as practiced with OpenJDK releases since 2018. The “tip” refers to the six-month OpenJDK release cadence, while the “tail” refers to the quarterly critical patch updates with releases in January, April, July and October. The intent is to minimize backports from “tip” to “tail” for improved maintenance. Maintainers of Java libraries are encouraged to practice the “tip and tail” release model.

JDK 24

Build 18 of the JDK 24 early-access builds was made available this past week featuring updates from Build 17 that include fixes for various issues. Further details on this release may be found in the release notes.

For JDK 24, developers are encouraged to report bugs via the Java Bug Database.

GlassFish

GlassFish 7.0.18, the eighteenth maintenance release, delivers bug fixes, improvements in documentation and new features such as: the ability to launch GlassFish Embedded Server from the command-line; and the ability to dynamically update the common classloader, via the new GlassfishUrlClassLoader class, if a common library is added without the need to restart the server. Further details on this release may be found in the release notes.

GraalVM

Fabio Niephaus, Research Manager on the GraalVM team at Oracle Labs, announced the release of a new GitHub repository complete with new demos and guides for the GraalPy, GraalJS, and GraalWasm projects. This repository coincides with the recent release of GraalVM for JDK 23 in which GraalPy and GraalWasm were elevated to stable and suitable for production workloads.

TornadoVM

TornadoVM 1.0.8, the eighth maintenance release, provides bug fixes and improvements such as: new methods, printTraceExecutionPlan() and getTraceExecutionPlan(), defined in the TornadoExecutionPlan sealed class, to log and dump operations enabled for the TornadoVM execution plan; the removal of the getHandleByIndex() method, defined in the PowerMetric interface, as it performed low-level functionality that did not need to be exposed in a device context; and minor improvements in benchmarking. More details on this release may be found in the release notes.

Spring Framework

With the upcoming release of Spring Framework 6.2 in November 2024, the Spring team has already prepared plans for the Spring Framework 7.0 release in November 2025. As stated:

We will upgrade our baseline to Jakarta EE 11 (Tomcat 11, Hibernate ORM 7, Hibernate Validator 9) and embrace the upcoming JDK 25 LTS, while retaining a JDK 17 baseline in alignment with the wider Java ecosystem. For Kotlin applications, we intend to base Spring Framework 7’s support on Kotlin 2.

The team also plans to implement a null-safety strategy using the JSpecify annotations. Further details on JSpecify may be found in this InfoQ news story.
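JSpecify 1.0 is already available, so a hedged sketch of what annotated code could look like follows; the class and method here are illustrative, not taken from Spring Framework:

    // JSpecify null-safety annotations (org.jspecify.annotations):
    // @NullMarked makes non-null the default; @Nullable marks the exceptions.
    import org.jspecify.annotations.NullMarked;
    import org.jspecify.annotations.Nullable;

    @NullMarked
    public class UserLookup {
        // Returns null when no user matches; callers must handle that case.
        public @Nullable String findDisplayName(String userId) {
            return userId.isEmpty() ? null : "user-" + userId;
        }
    }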

Helidon

Version 4.1.2 of Helidon delivers dependency upgrades and notable changes such as: an improved implementation of server-sent events (SSE) in the WebServer component; and the addition of a missing validation within the findNewLine() method, defined in the DataReader class, to keep the internal index within bounds. Further details on this release may be found in the changelog.

Similarly, version 3.2.10 of Helidon also provides dependency upgrades and notable changes such as: the ability to close HTTP connections after five minutes of idle time; and a replacement of server configuration with socket configuration in the WebServer component. More details on this release may be found in the changelog.

JobRunr

Version 7.3.0 of JobRunr and JobRunr Pro have been released with new features such as: full compatibility with Quarkus 3.15 and Kotlin 2.0.20; improved thread safety incorporated into the BackgroundJobServer class; and the ability to set the serverTimeoutPollIntervalMultiplicand attribute, defined in the BackgroundJobServerConfiguration class, via properties. Further details on this release may be found in the release notes.
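For readers unfamiliar with JobRunr, a minimal sketch of its core enqueue API follows; it assumes JobRunr has already been configured with a storage provider and a running background job server, and EmailService is a hypothetical class:

    import org.jobrunr.scheduling.BackgroundJob;

    // Enqueue a fire-and-forget background job; JobRunr serializes the lambda
    // and a BackgroundJobServer instance executes it.
    public class EnqueueDemo {
        public static void main(String[] args) {
            BackgroundJob.enqueue(() -> new EmailService().sendWelcome("user@example.com"));
        }

        static class EmailService {
            void sendWelcome(String address) {
                System.out.println("sending welcome mail to " + address);
            }
        }
    }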

Hibernate

The release of Hibernate Reactive 2.4.2.Final ships with resolutions to issues such as: a ClassCastException when retrieving a composite table entity annotated with the Jakarta Persistence @IdClass; and embeddable JSON values not being mapped correctly in the database. More details on this release may be found in the release notes.

Micrometer

Version 1.13.5 of Micrometer Metrics features dependency upgrades and a resolution to a ConcurrentModificationException, thrown when a MeterFilter interface was configured after meters had already been registered, which left a Keycloak server instance unable to start. Further details on this release may be found in the release notes.
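For context, a sketch of the ordering behind the fix follows; the meter and filter names are illustrative. Filters should normally be configured before any meters are registered:

    import io.micrometer.core.instrument.Counter;
    import io.micrometer.core.instrument.MeterRegistry;
    import io.micrometer.core.instrument.config.MeterFilter;
    import io.micrometer.core.instrument.simple.SimpleMeterRegistry;

    public class FilterAfterRegistration {
        public static void main(String[] args) {
            MeterRegistry registry = new SimpleMeterRegistry();
            Counter logins = registry.counter("app.logins"); // meter registered first
            // Filter configured afterwards, the problematic ordering addressed
            // by the fix.
            registry.config().meterFilter(MeterFilter.denyNameStartsWith("jvm"));
            logins.increment();
            System.out.println(logins.count());
        }
    }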

Grails

The release of Grails 6.2.1 provides bug fixes, dependency upgrades and notable changes such as: compatibility with Groovy 3.0.22; and the ability to customize classes in cascading style sheets for the ErrorsViewStackTracePrinter class. More details on this release may be found in the release notes.

Infinispan

Versions 15.1.0.Dev04 and 15.0.10.Final of Infinispan ship with dependency upgrades and notable enhancements to Redis Serialization Protocol (RESP) operations that: ensure serialization consistently returns a response in RESP3 format; and eliminate a deadlock due to the get() method, defined in the SimpleCacheImpl class, which handles expiration in memory and returns an instance of the CompletableFuture class. Further details on these releases may be found in the release notes for version 15.1.0.Dev04 and version 15.0.10.Final.

Keycloak

Keycloak 26.0.0 has been released with new features such as: release cycles for their dedicated libraries (Java admin client, Java authorization client, Java policy enforcer) that are independent of the Keycloak release cycle; user sessions that are now persisted by default; a new default login theme; and a preview of tracing with OpenTelemetry. More details on this release may be found in the release notes. InfoQ will follow up with a more detailed news story.

Testcontainers for Java

Testcontainers for Java 1.20.2 delivers bug fixes, improvements in documentation, dependency upgrades and new features/enhancements such as: an implementation of MongoDB Atlas; new modules to support Databend and Timeplus; and an enhancement to the Cassandra module that eliminates runtime exceptions. Further details on this release may be found in the release notes.

RefactorFirst

Jim Bethancourt, principal software consultant at Improving, an IT services firm offering training, consulting, recruiting, and project services, has released versions 0.6.1 and 0.6.0 of RefactorFirst, a utility that prioritizes the parts of an application that should be refactored. These releases deliver: a change to calculate the average commit count of classes in a cycle only when the showDetails property is set to true, due to a strong correlation with the number of classes involved in a cycle; and the removal of the JGraphT modules, jgrapht-core and jgrapht-ext, from the report module due to a transitive dependency on the JGraphX Swing component, which resolves CVE-2017-18197, a vulnerability in which the mxGraphViewImageReader class is susceptible to XML External Entity (XXE) attacks. More details on these releases may be found in the release notes for version 0.6.1 and version 0.6.0.

Debezium

Almost two years since the release of Debezium 2.0.0.Final, an open-source distributed platform for change data capture (CDC), the release of Debezium 3.0.0.Final features: JDK 17 as a minimal version; support for Kafka 3.8.0; removal of the deprecated incremental signal field, additional-condition; and the addition of detailed metrics per database table. There were also updates to the connectors for all supported databases. Further details on this release may be found in the release notes. InfoQ will follow up with a more detailed news story.



MongoDB 8 Reduces Memory Use And Increases Speed – I Programmer

MMS Founder
MMS RSS

Posted on mongodb google news. Visit mongodb google news

MongoDB 8 has been released, and the developers have said this is the most secure, durable, available, and performant version of MongoDB yet, with significantly reduced memory usage and query times, and more efficient batch processing.

MongoDB is a NoSQL document database that stores its documents in a JSON-like format (BSON) with optional schemas. MongoDB Atlas is the fully-managed cloud database from the MongoDB team.
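For illustration, a minimal sketch using the synchronous MongoDB Java driver (org.mongodb:mongodb-driver-sync) follows; the connection string, database, and collection names are placeholders:

    import com.mongodb.client.MongoClient;
    import com.mongodb.client.MongoClients;
    import com.mongodb.client.MongoCollection;
    import org.bson.Document;

    public class QuickInsert {
        public static void main(String[] args) {
            try (MongoClient client = MongoClients.create("mongodb://localhost:27017")) {
                MongoCollection<Document> events =
                        client.getDatabase("demo").getCollection("events");
                // Documents are JSON-like: nested fields need no predeclared schema.
                events.insertOne(new Document("type", "login").append("userId", 42));
                System.out.println(events.find(new Document("type", "login")).first());
            }
        }
    }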


The performance improvements mean the MongoDB team has seen 36% better read throughput and 56% faster bulk writes. They have also shown 20% faster concurrent writes during data replication, and 200% faster performance on complex aggregations of time series data. Writing on the MongoDB blog, Jim Scharf, CTO at MongoDB, said:

“In making these improvements, we’re seeing benchmarks for typical web applications perform 32% better overall.”

Alongside the performance improvements, MongoDB 8.0 also improves sharding, distributing data across shards up to 50 times faster with less need for additional configuration or setup.

There’s also better support for search and AI applications through the option of quantized vectors. These are representations of full-fidelity vectors that require up to 96% less memory and are faster to retrieve, but remain accurate.

Data encryption is another area to have been improved with an expansion of MongoDB’s Queryable Encryption to also support range queries. This improvement has been developed by MongoDB’s Cryptography Research Group. It means customers can encrypt sensitive application data, store it securely as fully randomized encrypted data in the MongoDB database, and run expressive queries on the encrypted data without needing to be experts in cryptography.

Queryable Encryption lets applications encrypt sensitive fields in documents so that they remain encrypted even while the server processes them. There’s a whitepaper going into more details on the MongoDB website.

Another improvement is greater control during unpredictable spikes in usage and sustained periods of high demand. Administrators can now set a default maximum time limit for running queries, configure MongoDB to reject recurring types of problematic queries, and make query settings persist through events like database restarts.
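The new defaults are set server-side by administrators, but the Java driver has long offered the per-operation equivalent. A hedged sketch follows; the collection name and the 500 ms budget are illustrative:

    import com.mongodb.client.MongoClient;
    import com.mongodb.client.MongoClients;
    import com.mongodb.client.MongoCollection;
    import org.bson.Document;
    import java.util.concurrent.TimeUnit;

    public class BoundedQuery {
        public static void main(String[] args) {
            try (MongoClient client = MongoClients.create("mongodb://localhost:27017")) {
                MongoCollection<Document> inventory =
                        client.getDatabase("demo").getCollection("inventory");
                // The server aborts this find() if it runs longer than 500 ms.
                inventory.find(new Document("qty", new Document("$gt", 10)))
                         .maxTime(500, TimeUnit.MILLISECONDS)
                         .forEach(doc -> System.out.println(doc.toJson()));
            }
        }
    }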

MongoDB 8.0 is available now via MongoDB Atlas, as part of MongoDB Enterprise Advanced for on-premises and hybrid deployments, and as a free download with MongoDB Community Edition.


More Information

MongoDB Website

Queryable Encryption Whitepaper

Related Articles

MongoDB Adds Vector Search

MongoDB 7 Adds Queryable Encryption

MongoDB 6 Adds Encrypted Query Support

MongoDB 5 Adds Live Resharding


Article originally posted on mongodb google news. Visit mongodb google news
