IgniteTech Joins Google Cloud, MongoDB, and Snowflake as a Contributing Sponsor for …

MMS Founder
MMS RSS

Posted on mongodb google news. Visit mongodb google news

CEO Eric Vaughan takes the stage at The AI Conference, showcasing IgniteTech’s patent-pending AI innovation, which multiplies people’s availability

SAN FRANCISCO, September 10, 2024–(BUSINESS WIRE)–IgniteTech, the enterprise software company that recently transformed into an AI innovation startup, announced its sponsorship of The AI Conference industry event, which will be held from September 10 to 11, 2024, in San Francisco, California.

“I’m excited to share MyPersonas™ at The AI Conference, a solution that creates digital twins of our customers’ experts, reducing the repetitive inquiries they receive and allowing them to focus on more strategic tasks,” said Eric Vaughan, CEO of IgniteTech. “Beyond traditional AI technology, which offers avatars or agents, we’ve also enabled real-time access to the human behind the AI through our mobile application, obtaining their knowledge when needed to fill the gaps in the MyPersonas session, bringing humans and AI together uniquely.”

In his presentation on Wednesday, September 11, at 2:30 p.m., Vaughan will highlight the potential of humans and AI together and introduce a new approach in which digital “clones” enhance human skills instead of overshadowing them. The concept behind MyPersonas is to create AI-powered digital twins of an organization’s experts to break down information barriers and provide always-on access to critical knowledge. By staying connected to their human counterparts, MyPersonas offers a unique blend of AI and human capabilities, reducing the time subject-matter experts spend on repetitive answers and freeing them up for more significant work: no more repetitive “pings” for the same questions, however necessary, draining time out of their day.

MyPersonas is a brand new software solution leveraging state-of-the-art AI technology, created by IgniteTech’s AI Innovation team, which the company formed in 2023. The product, first introduced as Jive Personas to power the company’s Jive product before pivoting to a solution that integrates with any product’s knowledge base through its API, will be in customers’ hands at the end of Q3 2024 and includes groundbreaking, patent-pending ideas. For more information, visit MyPersonas.ai.

ABOUT IGNITETECH

Founded in 2010, IgniteTech is a leading AI-first enterprise software company. With a track record of successful acquisitions and rapid innovation, IgniteTech’s solutions power businesses worldwide. The recent announcements of AI product visions and enhancements across its entire portfolio highlight IgniteTech’s commitment to transforming its offerings with AI-centric innovative solutions.

View source version on businesswire.com: https://www.businesswire.com/news/home/20240910042266/en/

Contacts

For Media Inquiries: meridith@mk-bridge.com or teena@teenatouch.com
For Business Inquiries: software@ignitetech.com
Follow: LinkedIn / X

Article originally posted on mongodb google news. Visit mongodb google news



Infosys partners with Proximus Group; Mphasis opens innovation hub in London | YourStory

MMS Founder
MMS RSS


Infosys partners with Proximus Group of Belgium

Infosys and Proximus Group, a Belgium-based digital services and communication solutions provider, have partnered to unlock new business opportunities.

The new strategic collaboration will focus on a joint go-to-market approach that will use the products of Proximus’ International affiliates, including Route Mobile’s Communications Platform-as-a-Service (CPaaS) and Telesign’s Digital Identity (DI) solutions.

Combined with Infosys’ digital services, this is expected to drive innovation in omnichannel customer engagement and AI-driven digital assistants for their customers. The collaboration will also enhance digital security by providing robust DI and fraud protection solutions, ensuring trusted communication online.

Mphasis opens deep tech innovation hub in London

Technology services company Mphasis has opened its Innovation Hub in London. The facility will serve as a Centre of Excellence (CoE) for developing quantum computing, quantum cryptography, and artificial intelligence (AI) solutions to address industry challenges, including algorithmic underwriting, catastrophic risk modelling, and fraud detection.

Mphasis has steadily expanded its UK presence over the past few years and aims to double its headcount over the next three years through its London innovation hub.

MongoDB for Academia programme expands reach in India

MongoDB has expanded MongoDB for Academia in India, including a new partnership with the All India Council for Technical Education (AICTE).

The AICTE partnership will be supported by SmartBridge’s SmartInternz learning platform to give more than 150,000 Indian students access to virtual internships and gain the skills required to use MongoDB Atlas, a multi-cloud developer data platform.

As part of the programme’s expansion, MongoDB also partnered with GeeksforGeeks, a platform for computer science resources in India, making the MongoDB Developer Learning Path available to all of GeeksforGeeks’ 25 million registered users.

Launched in September 2023, the MongoDB for Academia in India programme provides student training, curriculum resources for educators, credits to use MongoDB technology at no cost, and certifications to help individuals start careers in the technology industry.

L&T Semiconductor forms partnership with IBM

L&T Semiconductor Technologies has entered into a research and development collaboration with IBM to design advanced processors. The scope of this work could include processor design for edge devices and hybrid cloud systems, as well as for areas like mobility, industrial, energy, and servers.

Under this collaboration, IBM and L&T Semiconductor anticipate focusing on innovation, functionality, and performance to enable reliable, secure, and scalable computing for a range of applications. This work would be supportive of India’s ambition to create globally competitive semiconductor technologies.

Previously, IBM signed a MoU with the Centre for Development of Advanced Computing (C-DAC), an autonomous scientific society of MeitY, to collaborate on the creation of a joint working group to accelerate processor design and manufacturing for High-Performance Computing (HPC) in India. 

ESG Data & Solutions unveils new AI research platform

ESG Data & Solutions launched ESGSure, an AI-powered research platform designed to assist environmental, social, and governance (ESG) research. With its ability to deliver scalable, rigorous, and evidence-based insights, ESGSure is aimed at financial institutions, investors, and corporations worldwide.

According to a statement, ESGSure addresses key ESG adoption challenges, including the shortage of trained professionals, high costs, and time-consuming processes. It provides access to ESG data on over 15,000 global companies, including Real Estate Investment Trusts (REITs) and private firms, enabling faster evaluation of compliance with global regulations.

The platform enables objective, up-to-date analysis by continuously incorporating updates from domain experts, thus reducing AI bias. The platform also complies with GDPR and global data protection regulations, addressing concerns around privacy and security.

HDFC Bank partners with Juspay to launch Smart Gateway

HDFC Bank has launched Smart Gateway in collaboration with Juspay to empower online businesses. The new payment solution is designed to streamline transactions, enhance customer experiences, and drive business growth.

According to a statement, Smart Gateway simplifies payment management by unifying various payment methods. It has the feature of vernacular checkouts along with robust fraud detection mechanisms. Built to handle heavy loads with ease, the platform offers PCI-DSS compliance, 24/7 risk monitoring, and advanced fraud detection.

Recognize invests in Blue Mantis

Recognize, a private equity firm focused on investing in technology services businesses with a thematic and operationally-oriented strategy, said it has made a significant majority investment in Blue Mantis, an IT services provider with expertise in managed services, cybersecurity, and cloud solutions.

Abry Partners, which has backed the company since 2020, will continue as a minority investor in the company, it said in a statement.

Blue Mantis services mid-market and enterprise clients with a North American focus across multiple vertical markets, including business services, healthcare, financial services, manufacturing, and the public sector.

Headquartered in Portsmouth, New Hampshire, Blue Mantis also has global delivery centres in Bengaluru, India, and Toronto, Canada.



On Mission to Upskill 500,000 Students, MongoDB Partners With Ministry of Education’s All …

MMS Founder
MMS RSS


MongoDB for Academia also working with learning platforms SmartInternz and GeeksforGeeks to equip Indian students with the knowledge needed to build modern, AI-powered applications

 

MongoDB, Inc. (NASDAQ: MDB) today announced the expansion of MongoDB for Academia in India, including a new partnership with All India Council for Technical Education (AICTE), Ministry of Education, Government of India. The AICTE partnership will be supported by SmartBridge’s SmartInternz learning platform to give more than 150,000 Indian students access to virtual internships and to gain the skills required to use MongoDB Atlas—the leading multi-cloud developer data platform.

As part of the program’s expansion, MongoDB also announced a new partnership with GeeksforGeeks, a platform for computer science resources in India. The collaboration will make the MongoDB Developer Learning Path available to all of GeeksforGeeks’ 25 million registered users.

Launched in September 2023, the MongoDB for Academia in India program provides student training, curriculum resources for educators, credits to use MongoDB technology at no cost, and certifications to help individuals start careers in the technology industry. The skills and training provided through the program are particularly important, as many Indian organizations struggle to find developers who have the skills to build modern applications and take advantage of emerging technologies like generative AI. According to a report from the National Association of Software and Service Companies, India’s technology sector will demand more than one million engineers with advanced skills in artificial intelligence and other capabilities over the next three years. Overall, the industry body expects there will be a need for around six million digital roles by 2028—and the available talent pool is forecast to be 4.7 million workers. This gap underscores the need for increased collaboration between industry and academia to upskill students and educators in India to meet the demands of the country’s large and growing economy.

“In India, we have a massive opportunity with the current wave of AI and modern technologies that will transform our lives and economy in the coming years. But to take advantage of that opportunity, it is vital our developers have the right skills. We’re excited to partner with MongoDB to help make that possible,” said Dr. Buddha Chandrasekhar, CEO of Anuvadini AI, Ministry of Education, and Chief Coordinating Officer of AICTE, Ministry of Education, Government of India.

To address this challenge, MongoDB for Academia is partnering with the All India Council for Technical Education (AICTE), the Indian government’s authority for the management of technical education, and the edtech platform SmartBridge to launch a virtual internship program through the SmartInternz platform. Aligned with the government’s Skill India Initiative, the program aims to provide full-stack development skills to over 150,000 students. Each internship will include 60 hours of experiential learning, hands-on bootcamps, courses, and project work, as well as simulated corporate environments where students can apply their learned skills, collaborate with peers, and receive mentorship.

“We’ve seen great appetite and interest on our platform for modern database technologies like MongoDB. We want to equip students with knowledge of in-demand technologies so they have skills they need to become the job-ready candidates India’s organizations are looking for,” said Amarender Katkam, Founder and CEO at SmartBridge and SmartInternz.

In the past year, the MongoDB for Academia program has made major strides toward its goal of upskilling 500,000 students. To date, more than 200 partnerships with educational institutions have been established, as well as collaborations with other government and private organizations. Hundreds of educators have been onboarded onto the MongoDB for Academia program, more than 100,000 students have received skills training, and over 450,000 hours of learning have been completed.

“India loves developers and so does MongoDB. I’m so proud of the work our MongoDB for Academia team is doing to empower Indian developers and to support the next generation of tech talent in this country,” said Sachin Chawla, Area Vice President, India at MongoDB.

MongoDB for Academia is also expanding its partnership with GeeksforGeeks, which will see the organizations collaborate on a number of new projects, including the syndication of key full-stack development courses to learners in both online and offline GeeksforGeeks centers across India. The MongoDB Developer Learning Path will also become available to all GeeksforGeeks users and is expected to reach more than 100,000 aspiring developers.

To learn more about MongoDB for Academia, visit mongodb.com/academia.

MongoDB Developer Data Platform

MongoDB Atlas is the leading multi-cloud developer data platform that accelerates and simplifies building with data. MongoDB Atlas provides an integrated set of data and application services in a unified environment to enable developer teams to quickly build with the capabilities, performance, and scale modern applications require.

About MongoDB

Headquartered in New York, MongoDB’s mission is to empower innovators to create, transform, and disrupt industries by unleashing the power of software and data. Built by developers, for developers, our developer data platform is a database with an integrated set of related services that allow development teams to address the growing requirements for today’s wide variety of modern applications, all in a unified and consistent user experience. MongoDB has tens of thousands of customers in over 100 countries. The MongoDB database platform has been downloaded hundreds of millions of times since 2007, and there have been millions of builders trained through MongoDB University courses. To learn more, visit mongodb.com.



Presentation: Not Just Memory Safety: How Rust Helps Maintain Efficient Software

MMS Founder
MMS Pietro Albini

Article originally posted on InfoQ. Visit InfoQ

Transcript

Albini: The Rust programming language has been on a meteoric rise lately, and there are good reasons for that. Rust started at Mozilla Research as a small research project to see if a new language could help improve how Firefox is developed and how Firefox performs. Today, 9 years since the first stable release of Rust, it is a large project with around 300 people, both volunteers and people paid by companies, contributing to Rust and improving it.

Rust is being adopted everywhere, and the worldwide developer community likes Rust and wants to use it. In the Stack Overflow Developer Survey, Rust has been voted the most loved language for 8 consecutive years. The main reason for Rust to exist is memory safety, because low-level programming languages like C and C++ are plagued by memory safety issues: buffer overflows, use-after-frees, segmentation faults, null pointer dereferences.

All of these are problems that Rust is designed to prevent: with Rust, you cannot introduce them into your program, because such a program simply won't compile. Microsoft did some research on the vulnerabilities in their products, and around 70% of them were security vulnerabilities due to memory safety issues, things that Rust would prevent. You could completely eliminate 70% of your vulnerabilities by using Rust as your low-level programming language. Should you care about that? If you're at this conference, you're probably already using a memory safe language, because all high-level programming languages, like Java, Python, JavaScript, Go, and Ruby, are already memory safe. They are not plagued by the issues present in C or C++. So why should you care about Rust? Because Rust is not just memory safety: Rust also enables efficient and maintainable code.

It can help you squeeze out every single bit of performance while still keeping all of the tooling and developer experience that you know and love. A very recent example is uv, a reimplementation of the Python package manager pip in Rust. In benchmarks, it was between 10x and 115x faster than pip. That is a best-case scenario; not everything you write in Rust is going to get a 10x or 100x speedup. Still, Rust empowered this team to create such performant software. The Home Assistant project, a Python project to manage the IoT devices in your home, saves 215 hours of CI time every month just by switching its package manager to uv, just from the efficiency of Rust. For a single project to save that much compute is staggering.

Background

In this talk I want to cover: how can you leverage Rust? What are the parts of Rust that could be interesting for your project, for your company? What are the reasons you should adopt it? I am Pietro, a longtime member of the Rust project. Nowadays I'm active in the Rust project's infrastructure, releases, and security response, making sure that Rust gets delivered to you. Before that, I was the lead of the Rust infrastructure team, and I served on the Rust Core team for 2 years. At my day job, I'm also the technical lead of Ferrocene, a distribution of Rust for safety-critical software in areas like automotive and aerospace.

Why Rust?

I want to start with the elephant in the room: Rust is not an easy language to learn. If you look around online, you'll see people everywhere saying it's hard to learn. That is true, because Rust forces you to use a different programming model. One of the core pillars of Rust, the reason why Rust can ensure memory safety, is the concept of single ownership of data: a value can only be owned by a single part of your code at a given point in time. You cannot have ownership spread between multiple parts of your code.

You can then lend out references if you want other parts of your code to access the data. The problem is, most programming languages don't enforce that. When you come to Rust, you're going to hit a big wall: you have to internalize this new model and rethink how you architect your software. Once you do that, Rust will click, and you will be able to use it productively and take advantage of all of the good things about Rust without slowing down your team. Google, which is a large user of Rust and has been rewriting more of its services in Rust, recently ran an internal survey.

As the director of engineering of Google Android announced at a conference, Rust teams at Google are as productive as teams using high-level programming languages like Go, and more than twice as productive as teams using C++. You can get all of this after you learn Rust; you can get all of this without slowing down your team.

There is also another reason why I think Rust is hard to learn, and this is something every person trying Rust, me included, is guilty of: we all make Rust harder to learn for ourselves, because Rust allows you to squeeze out every bit of performance. Rust offers all of the tools to create reliable and efficient software, but using them requires some of the parts of Rust that are harder to learn. If you want to learn Rust, what I can recommend is: don't start by writing the most efficient algorithm possible, even though Rust tempts you to do that. Start by writing normal code.

Then, as you get more familiar with Rust and with the concepts that allow you to write efficient software, you can dive into them. Don't prematurely optimize and make Rust harder to learn for yourself. Also, you're not alone when learning Rust. There are helpful communities online that can help you, a lot of resources, both freely available and in books, and commercial training providers for upskilling your team. And there is more than that, because with Rust, every one of us has a pair programmer: the compiler.

This is not an exaggeration, because working with the Rust compiler feels like having a senior developer next to you. Rust puts so much focus on good and actionable compiler errors that it is pushing the industry forward in what good error messages are, and we are now seeing other compilers take inspiration from Rust and work on their error messages. We care about them so much that we have people in the Rust team focused just on improving error messages. If you find an error message that is confusing, that is not as great as it could be, that is a compiler bug: a bug you should report, and one the Rust project takes seriously and fixes.

Let’s see an error message. This is an error message related to ownership, the concept of having a single owner. What happened here is that we tried to use the same data in multiple places. We can see that Rust points out what the actual problem is: we moved the variable data into the print function even though it had already been moved somewhere else before. It points out where it was moved before, so you can go there and figure out how to refactor the code to not move it. It points out where the data is defined and which type it has, so if you need to refactor how the type works, you know where to do that. The compiler also knows that the type can be cloned, meaning you can duplicate the value to create a new copy to pass along. Since the compiler knows this, it suggests that you can clone the data if the performance impact is negligible for you.
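A minimal sketch of the kind of code behind that error (the variable and function names here are illustrative, not taken from the slide): passing an owned String to a function moves it, so using it again afterwards only compiles once you clone it.

```rust
// Takes ownership of its argument: the caller loses access to the value.
fn print(data: String) {
    println!("{data}");
}

fn main() {
    let data = String::from("hello");
    // print(data); print(data); // would NOT compile: "use of moved value: `data`"
    print(data.clone()); // the compiler's suggestion: clone if the cost is negligible
    print(data);         // the original is still owned here, so this compiles
}
```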

The Rust project is so confident in its compiler errors that we even allow the compiler to automatically fix some of them. You can invoke the compiler with a flag so that, for the errors we are most confident about, it just tweaks your source code to fix the error for you, making warnings and errors disappear. Rust also ships with Cargo, the official build system, which builds your project and fetches and builds dependencies. That is something high-level programmers are probably accustomed to, but having a unified tool used by everyone is a first if you come from a background like C or C++. Cargo can fetch dependencies from your company's internal registry or from the public ecosystem through the crates.io package registry, where anyone can publish a new dependency; you just add a line to your Cargo.toml to fetch it and include it in your program.
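As a sketch, pulling a dependency from crates.io really is a one-line addition to the `[dependencies]` section of Cargo.toml (the crate and version shown are just an example):

```toml
[dependencies]
serde = "1.0"  # fetched and built automatically on the next `cargo build`
```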

Rust also ships with all of the tools you would expect, a static analyzer called Clippy, the rustfmt code formatter, if you want to ensure a consistent style across your project. Top notch IDE integration with rust-analyzer that can hook into multiple editors, and the rustup version manager, which allows you to install Rust, upgrade Rust, and use different versions of Rust across multiple projects.

Rust In Action – Using Macros

Let's now see Rust in action, and some of the ways Rust can help you write more efficient, performant, and maintainable code. I'm going to focus on two things. One is how to use macros and the type system to make your code more robust without sacrificing efficiency. The other is how you can leverage concurrency to increase the efficiency and performance of your code. Let's start with the first one. Rust has macros, a feature you can use to generate repetitive code: maybe you create a macro to declare two different functions from a template, or write a reusable pattern to reduce duplication. Rust actually has two different kinds of macros. One is what we call declarative macros, which are defined inline in the file and are similar to preprocessor macros you might find in other languages.

The other is procedural macros, which are external code generator tools that can hook into the language itself. I want to focus on procedural macros, and in particular on derive macros, which are a way to generate complex default implementations for your Rust traits. Rust traits are practically the same as the interfaces or type classes you might find in other programming languages. For some traits, you have to write a custom implementation for every type, because the behavior changes between objects. For others, there is a default implementation that makes sense for most types, which you can then override. While for very simple cases you can just write a default method, for others you might need to look at the type and generate a complex custom default depending on the shape of your data.

Let's look at an example: the Clone derive. Rust has the Clone trait, which allows you to duplicate an object; we saw it before in that error message I showed you. The implementation of clone, if you think about it, is fairly simple: you just take all of the fields in your struct and clone each of them recursively. This gives you a full clone, a full copy of your data. If you want to implement that in Rust, you can just put the derive Clone attribute at the top of your type, and that's it. The compiler will invoke the code generator, and it will generate the best implementation of clone for your type.

Then you can just invoke the clone method. This is done without runtime reflection, also because Rust doesn't have reflection, so you couldn't use it anyway. This means that the macro looks at your type and generates the best code specific to it. If we look at what the macro actually generated behind the scenes, we can see that it implemented the Clone trait for the type. Inside of it, it defines the clone method, and for every single field, it just duplicates it. This is exactly the code you would write yourself if you were to implement Clone by hand.
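To make that concrete, here is a small illustrative type (not from the slides) with the built-in derive, next to a hand-written implementation equivalent to what the macro expands to:

```rust
// The derive generates a field-by-field clone at compile time.
#[derive(Clone)]
struct User {
    name: String,
    age: u32,
}

// Roughly what `#[derive(Clone)]` expands to behind the scenes:
struct ManualUser {
    name: String,
    age: u32,
}

impl Clone for ManualUser {
    fn clone(&self) -> Self {
        ManualUser {
            name: self.name.clone(), // each field is cloned recursively
            age: self.age.clone(),
        }
    }
}
```

Both versions behave identically; the derive simply saves you from writing and maintaining the boilerplate.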

This is what Rust calls zero-cost abstractions, which doesn't mean abstractions that are free, abstractions that have no performance impact at all. What it means is that those abstractions are as fast, as performant, as efficient as code you would manually write and manually optimize for that type yourself. This is so powerful because it allows you to use abstractions to simplify your code without making any sacrifice in efficiency.

Clone is a built-in derive: it's available in every Rust toolchain, and you don't need to do anything to use it. You can also write your own custom derives for your project if they make sense for your code structure, or fetch derives from the third-party ecosystem. The most popular one is Serde, a generic serialization and deserialization framework for Rust. It's so ergonomic that it's basically the default: most Rust libraries support Serde, and it's what everyone recommends except in very niche cases where it doesn't fully fit. With Serde, you can just put derive Serialize and derive Deserialize at the top of your type, and you get fully optimized serialization and deserialization code.

This doesn't do any intermediate step like putting the data into a dynamic map accessed by strings, which has a runtime cost, and a correctness cost, because you need to ensure you're accessing the right strings. The implementation generated by Serde translates the data from the format you want, like JSON in this example, directly into the structs you have, into the typed representation of it. This is a macro, so it doesn't use reflection; it's all code generated at compile time, which the compiler then fully optimizes for your type. It's a zero-cost abstraction: it gives you a full serializer and deserializer as performant as one you would hand-write yourself, but without the maintainability nightmare of manually implementing serializers for every single one of your types.

How do you represent different variants of data? In that example, we might want two different kinds of data depending on the type of message we were receiving in our application. In Rust, you can do that with Enums. You might say: but Enums are just types that have multiple variants; you can't have data in them. That is true in most programming languages, but in Rust, Enum variants can have data attached to them. This is called algebraic data types, or sum types, in the programming languages field. It's not an innovation of Rust: you have them in a lot of functional programming languages that are more niche, and even in popular ones like Scala or Haskell. Enums allow you to attach data of different types, different variants of data, depending on the variant of the Enum you're in.

In this case, we have a very simple Enum that defines the configuration of the database for your application. You might want to choose between SQLite and Postgres, to both ease local development and have a reliable system in production. The configuration for the two of them couldn't be more different, because SQLite accepts a path to a file, while Postgres requires a URL, username, and password. With Enums, you can attach that information directly in the Enum.

You can put it in the same place as the variant, which makes it impossible to represent invalid state. With this, the compiler ensures that you are never going to be able to create a SQLite variant with a username and password. This greatly increases maintainability. It's something I dearly miss every single time I'm using a language that doesn't have it. Once you start using Enums like this, you are going to want them everywhere.
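As a sketch of such an Enum (the variant and field names are assumptions, since the slide itself isn't reproduced here):

```rust
#[derive(Debug)]
enum DatabaseConfig {
    Sqlite { path: String },
    Postgres { url: String, username: String, password: String },
}

fn main() {
    let dev = DatabaseConfig::Sqlite { path: ":memory:".into() };
    let prod = DatabaseConfig::Postgres {
        url: "postgres://db.example.com".into(),
        username: "app".into(),
        password: "secret".into(),
    };
    // A SQLite variant with a username or password is simply unrepresentable:
    // those fields only exist on the Postgres variant, so
    //   DatabaseConfig::Sqlite { path: "db".into(), password: "x".into() }
    // would not compile.
    assert!(matches!(dev, DatabaseConfig::Sqlite { .. }));
    assert!(matches!(prod, DatabaseConfig::Postgres { .. }));
    println!("{dev:?}");
}
```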

Also, with Enums, you can only access the inner data after checking that it's the correct variant. You can't just write database.password, because you don't know whether it's a Postgres variant that actually has a password, or SQLite, which doesn't. The compiler prevents you from doing that, which also helps ensure correctness. There are multiple ways to check the variant before accessing the data, and the most flexible one is pattern matching, which lets you check whether the shape of the data corresponds to what you expect.

If it does, it lets you bind the inner fields of that data structure as variables you can use. Pattern matching is also not a Rust innovation. It's present in a lot of languages, and because it is so expressive and powerful, it is permeating through mainstream languages: Java and Python implemented pattern matching recently, and there is interest in C++ in implementing it too. How does it actually look? Here we see a match statement that checks the database value we defined before. Depending on the kind of database you have, it connects to it in a different way.

Patterns are evaluated top to bottom. If we evaluate this one manually, first we check whether we want an ephemeral SQLite database. We first check whether the Enum variant is the SQLite variant, and only in that case do we check whether the path equals ":memory:", which is SQLite's convention for creating a database in memory rather than in a file. If that's the case, Rust invokes the ephemeral storage function, which is the body of the pattern, so it is only called for an in-memory SQLite database. If it doesn't match either the SQLite variant or the path, we go to the next pattern.

There, as before, we first check whether it's the SQLite variant. We haven't put any constraint on the path, so the compiler creates a local variable called path that you can then pass to the function. This is how Rust guarantees correctness: Rust only allows you to access path if you first checked that it was SQLite. Then, if it wasn't SQLite either, it does the same thing for Postgres: it checks the variant, and if it matches, it binds the three inner fields as variables that we then use to connect to the database and authenticate. This is a simple example of a match statement. You can write far more complex patterns that still look very similar to how you would define the data in the first place. They are an extremely concise way to express the checks you want.
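A hedged reconstruction of that match, with a guard standing in for the `:memory:` check; the enum, the function name, and the returned strings are assumed for illustration, not the slide's exact code:

```rust
enum DatabaseConfig {
    Sqlite { path: String },
    Postgres { url: String, username: String, password: String },
}

// Patterns are tried top to bottom; the first arm whose shape
// (and guard) matches wins.
fn connect(config: &DatabaseConfig) -> String {
    match config {
        DatabaseConfig::Sqlite { path } if path.as_str() == ":memory:" => {
            "ephemeral in-memory sqlite".to_string()
        }
        // No constraint on path here, so it is bound as a variable.
        DatabaseConfig::Sqlite { path } => format!("sqlite file at {path}"),
        // The inner fields only become accessible once the variant matched.
        DatabaseConfig::Postgres { url, username, .. } => {
            format!("postgres at {url} as {username}")
        }
    }
}

fn main() {
    let dev = DatabaseConfig::Sqlite { path: ":memory:".into() };
    assert_eq!(connect(&dev), "ephemeral in-memory sqlite");
    let file = DatabaseConfig::Sqlite { path: "app.db".into() };
    assert_eq!(connect(&file), "sqlite file at app.db");
}
```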

Crucially, match statements in Rust are exhaustive, which means that you cannot forget to handle some variants. You can of course add a default arm if you don't care about some of them. If in the example before we remove the Postgres arm, our pair programmer tells us what's wrong. The error says that not all patterns in our match are covered, because the Postgres variant is not handled. It points to where the database Enum is defined, so you can go there, look at the documentation, and see what you need to do. It even suggests the syntax for the missing match arm, which in this case is just a todo, because the compiler doesn't know how to implement the actual code, but it still points you in the right direction. This just scratches the surface of procedural macros and Enums.

You can write procedural macros to do basically anything. There are procedural macros that check at compile time whether your database queries are correct. There are procedural macros that generate code for your tables, all at compile time. Enums are so powerful that you are going to miss them everywhere. You can use Enums to represent state machines that cannot enter invalid states. You can use them to define the layout of your data in a way the compiler can check, all without losing any efficiency or performance, while drastically increasing maintainability.
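For example, a state machine where invalid states are unrepresentable might look like this sketch (the states and the transition rules are invented for illustration): there is no way to hold a session token while disconnected, because the token field only exists on the Connected variant.

```rust
#[derive(Debug)]
enum Connection {
    Disconnected,
    Connecting { attempt: u32 },
    Connected { session_token: String },
}

// One step of a toy state machine. Exhaustive matching means adding a new
// state later forces every transition function to handle it.
fn step(conn: Connection) -> Connection {
    match conn {
        Connection::Disconnected => Connection::Connecting { attempt: 1 },
        Connection::Connecting { attempt } if attempt < 3 => {
            Connection::Connected { session_token: format!("token-{attempt}") }
        }
        Connection::Connecting { .. } => Connection::Disconnected, // give up
        done @ Connection::Connected { .. } => done,               // stay connected
    }
}

fn main() {
    let s = step(Connection::Disconnected);
    assert!(matches!(s, Connection::Connecting { attempt: 1 }));
    let s = step(s);
    assert!(matches!(s, Connection::Connected { .. }));
}
```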

Leveraging Concurrency to Increase Efficiency and Performance

We have seen how to structure our code to be more maintainable without losing efficiency. What if we actually want to increase our performance? The best way to do that is through concurrency, because compared to three decades ago, nowadays everything has multiple cores. We go from server chips with 128 cores to consumer computers with at least 8 cores basically everywhere; even our mobile phones have plenty of compute power. Rust allows you to tap into that. Let's first see something that is not parallel: a Rust iterator. Iterators are popular in basically every functional language, and allow you to access, transform, and collect data in a functional style. This is a very simple iterator that takes a list of numbers, creates an iterator out of them, transforms them with a function we assume is slow, and then sums everything together. If you have a lot of numbers and the computation is really slow, this is going to hurt the performance of your program. How can you fix that? Rust has a third-party library you can pull in called Rayon, which enables parallel iterators.
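The sequential version might look like this minimal sketch, with a trivial stand-in for the slow computation (the function names are illustrative):

```rust
// Stand-in for something genuinely expensive.
fn slow_computation(n: u64) -> u64 {
    n * n
}

// Transform every number, then sum: entirely sequential.
fn total(numbers: &[u64]) -> u64 {
    numbers.iter().map(|&n| slow_computation(n)).sum()
}

fn main() {
    assert_eq!(total(&[1, 2, 3, 4]), 30); // 1 + 4 + 9 + 16
}
```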

With parallel iterators, you just import rayon and replace the iter method with par_iter, which spawns the map across multiple threads automatically behind the scenes, without you having to do anything. This is also not a Rust innovation: Java, for example, has parallel streams, but it's scary to use them because you are introducing concurrency into a program that was never designed for it. Imagine doing this in a 10-year-old code base of 10,000 lines that was never designed for parallelism.
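Rayon is a third-party crate, so as a dependency-free illustration, here is roughly what `par_iter` does behind the scenes, sketched with the standard library's scoped threads. The real Rayon uses a work-stealing thread pool, and the only change you make in your code is `.iter()` to `.par_iter()`; the chunking below is an assumption for the sketch.

```rust
use std::thread;

fn slow_computation(n: u64) -> u64 {
    n * n
}

// Split the input into chunks, process each chunk on its own thread,
// then combine the partial sums. `threads` must be at least 1.
fn parallel_total(numbers: &[u64], threads: usize) -> u64 {
    let chunk_size = ((numbers.len() + threads - 1) / threads).max(1);
    thread::scope(|scope| {
        let handles: Vec<_> = numbers
            .chunks(chunk_size)
            .map(|chunk| {
                scope.spawn(move || chunk.iter().map(|&n| slow_computation(n)).sum::<u64>())
            })
            .collect();
        handles.into_iter().map(|h| h.join().unwrap()).sum()
    })
}

fn main() {
    // Same result as the sequential version, computed on two threads.
    assert_eq!(parallel_total(&[1, 2, 3, 4], 2), 30);
}
```

Scoped threads let the workers borrow `numbers` directly, because the scope guarantees they finish before the function returns.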

Trying to do this would just cause extremely hard to debug concurrency issues. Isn't that scary? Not with Rust. One of the things that is actually unique to Rust, not present in any other programming language, is what we call fearless concurrency. That means having the confidence to write parallel code, the confidence that you're not making mistakes or introducing data races, and, crucially, the confidence to add parallelism to existing code, to an ancient code base that was never designed for it. This is an extraordinary claim.

Let's see how it actually works in practice. We're going to use this simple example to avoid throwing too much at you. In this example, we first create some data, wrapped in an Rc, which is a reference-counted pointer that tracks how many copies of the data exist, so that when the count goes to zero, the data can be freed. Inside there is a RefCell, which is a way to move some of Rust's restrictions from compile time to runtime. Inside of that, we have the actual piece of data. Then we create a runnable closure, which calls the process function on that data, and we spawn it in a different thread.

If you actually try to do this, the compiler will not let you compile it, because this program has a crucial concurrency bug. Rc is not thread safe. The count Rc keeps of how many copies exist is not incremented atomically, so you can have data races and wrong behavior. The compiler detected this and pointed out that you cannot do that. This is thanks to the Send trait, which is an interface that is automatically implemented by the compiler for you. You never have to add it manually.

With the Send trait, Rust figures out that if all of the fields in your struct are Send, your type is also Send. If even one of the fields is not Send, because, for example, it's an Rc, which internally uses abstractions that are not thread safe, then your type is not Send either. That is how Rust catches this: it sees the Rc, figures out Rc is not Send, and prevents you from doing this.

Let's fix it. Let's replace Rc with Arc, which is exactly the same thing as Rc, but uses atomic operations internally to avoid thread safety issues. The fact that Rc and Arc are different types is great, because atomic operations are more expensive for the processor. You don't have to use atomics everywhere in your program just on the off chance that a specific type is going to be moved to a different thread. You can use Rc by default, and the compiler will point out exactly the places where you need to make it thread safe and incur the cost of thread-safe operations. If you actually try to compile that code, you get another error: RefCell cannot be shared between threads safely, because RefCell allows multiple parts of your code to mutate the data.

If you share it between different threads, you have different threads that can mutate the same data at the same time, and that is a data race. You really don't want to debug that. In this case, again, the compiler will tell you that you cannot do it; you need to make that RefCell thread safe. That is done with the counterpart of the Send trait, the Sync trait, which represents types that can be accessed concurrently by multiple threads. Again, this is implemented automatically by the compiler, with the same rule as the Send trait. If we now fix it by replacing RefCell with a lock, our code compiles, and we have made it thread safe.
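The fixed version, with Rc replaced by Arc and RefCell replaced by a Mutex, might look like this sketch (the `process` step from the talk is stood in by a simple push):

```rust
use std::sync::{Arc, Mutex};
use std::thread;

fn main() {
    // Arc: atomic reference counting, safe to clone across threads.
    // Mutex: thread-safe interior mutability, replacing RefCell.
    let data = Arc::new(Mutex::new(vec![1, 2, 3]));

    let data_for_thread = Arc::clone(&data);
    let handle = thread::spawn(move || {
        // Stand-in for `process`: mutate the shared data under the lock.
        data_for_thread.lock().unwrap().push(4);
    });
    handle.join().unwrap();

    assert_eq!(*data.lock().unwrap(), vec![1, 2, 3, 4]);
}
```

With Rc and RefCell in place of Arc and Mutex, this exact program is rejected at compile time, which is the point: the compiler, not the reviewer, finds the thread safety bug.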

This is a toy example, and it was fairly trivial for a Rust programmer to see what the problem was. But imagine you have a very large, ancient code base, and you want to add parallelism just to the hot loop to make it faster. You don't have to go and manually inspect all of the types to make sure they're thread safe. The compiler does it for you. This is because of all the functions in Rust that introduce concurrency, for example the function to spawn a new thread that we saw, or Rayon, which creates a parallel iterator and processes data across threads.

They all require that their inputs implement the Send and Sync traits. That's why Rust allows you to fearlessly add parallelism: you know the compiler has your back and will prevent all of the thread safety issues you would otherwise encounter in any other programming language. Rust also has its own little twist on how locks work, to make concurrency even easier to implement. In most programming languages, locks protect pieces of code, not data.

In practice you want to use them to protect data, but you cannot guard the data directly. You need to know every single place that accesses or modifies the data, and manually lock and unlock around it. With Rust, locks encapsulate the data they protect: you actually store the data inside of the lock itself. The Rust type system only lets you access the inner data once you acquire the lock, and as soon as you release it, the type system prevents your program from compiling if it tries to access the data while it's no longer locked.
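A minimal illustration of the data living inside the lock, using the standard library's Mutex:

```rust
use std::sync::Mutex;

fn main() {
    // The Vec lives inside the Mutex: there is no way to reach it
    // except through lock(), which returns a guard.
    let numbers = Mutex::new(Vec::new());

    {
        let mut guard = numbers.lock().unwrap();
        guard.push(1);
        guard.push(2);
    } // the guard is dropped here, releasing the lock automatically

    // After the guard is gone, the only way back in is another lock().
    assert_eq!(numbers.lock().unwrap().len(), 2);
}
```

Forgetting to unlock is impossible by construction, because releasing the lock is just the guard going out of scope.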

This is not actually valid Rust code; I invented the unlock method to make it clearer to understand. Note that you still have to prevent deadlocks yourself: the Rust compiler cannot refuse to compile a program that might deadlock. A deadlock, while not easy to debug, is at least easy to notice, because the program visibly stops. A data race that corrupts data maybe 1 time in 1,000 concurrent accesses is way harder to detect. Rust prevents those through Send and Sync.

The way locks are designed enables fearless concurrency. It lets you add parallelism to your code and squeeze every bit of performance from the hot path of your application, the part that does most of the data processing, without having to worry about it. Rust has even more to offer. There is native support for async/await, which gets even more interesting the more you look at it, and a powerful type system beyond Enums that allows you to design extremely maintainable software.

Rust’s Interoperability with Other Languages

Rust makes it really easy to interoperate with other languages. If you want to take advantage of Rust, you can, but you don't have to rewrite all of your code or all of your legacy application that was developed over 20 years. Rust was designed to slowly replace the parts of your code base that would benefit the most, maybe the parts that would benefit from concurrency, or the parts that need performance without having to deal with memory safety issues. That is the approach Mozilla took to introduce Rust in Firefox; Rust was originally created by Mozilla precisely to improve Firefox.

When Mozilla launched Firefox Quantum in 2017, they rewrote the Firefox CSS engine. This led to a 30% speedup in loading amazon.com. This was not because the old code was inefficient; it was because the old code was single-threaded. It was a legacy C++ application, and Mozilla deemed it basically impossible to add parallelism to it without all of the protections and confidence that Rust gives you. With Rust, Mozilla was able to replace just a small part of Firefox, add parallelism to it, and get that big a speedup. There are multiple ways to interoperate with Rust.

The way that is compatible with most languages is through the C FFI, because most languages nowadays can call into a C library. Rust can expose a C interface: you can create a C interface for your library or part of your program, so that you can call from your application into Rust and from Rust into your application. There is excellent tooling to generate bindings and reduce the boilerplate you have to write.
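As a tiny sketch of the Rust side of a C FFI boundary (the function itself is invented for illustration):

```rust
// `extern "C"` selects the C calling convention; `#[no_mangle]` keeps the
// symbol name `add` so C, or any language with a C FFI, can link against it.
#[no_mangle]
pub extern "C" fn add(a: i32, b: i32) -> i32 {
    a + b
}

fn main() {
    // Callable from Rust too; from C you would declare:
    //   int32_t add(int32_t a, int32_t b);
    assert_eq!(add(2, 3), 5);
}
```

Compiled as a `cdylib` or `staticlib`, this function is indistinguishable from a C function to the caller.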

Possibly the most impressive one is PyO3, which allows you to integrate with Python and create Python native modules in Rust. This is a fully working PyO3 module, shown on the slide, where we create a function that sums two numbers and returns a string, and then create a module containing that function. If you compile this, you have a fully working Python module that you can just import. The module is written in Rust, without any boilerplate, and without having to worry about the memory safety issues you would face writing it in C.

Conclusion

Here is what I recommend: identify which parts of your code base would benefit the most from Rust. Don't start blindly rewriting everything, because that is often not the best approach. Think hard about where you can get the most benefit from Rust, and selectively start introducing it. I hope I showed you that Rust is useful beyond memory safety, and that you can leverage it to increase your efficiency and performance even if you're coming from a high-level programming language.

I think a key to Rust's success is not just its parallelism or its memory safety, but the fact that Rust successfully managed to merge two worlds of the programming ecosystem that were completely separate before. Rust brings a lot to low-level programmers: it allows them to avoid security vulnerabilities, improve safety, and enjoy the modern developer experience and tooling that every other programmer benefits from.

For higher-level programmers, people who were not familiar with or maybe scared of C and C++, Rust allows them to squeeze out every single bit of performance without compromising on tooling, developer experience, or ergonomics. This is why Rust empowers everyone to build reliable and efficient software, and why Rust has been the most loved language for 8 years in a row.

Questions and Answers

Participant 1: In your experience, who learns Rust better: an expert C++ developer, or a junior developer?

Albini: I don't think there is much difference, because, in the end, Rust requires you to use a different model of programming than what you were used to in C++. An expert developer has to learn to let go of the C++ idioms they were used to, but an expert C++ developer can also appreciate even more, and actually understand, why the protections Rust gives you are there, because in C++ they should be following the same rules, just without the compiler as a pair programmer to help them. It depends on the person's style of learning, but I don't think there is going to be much difference realistically.

Participant 2: Where do you see Rust not being used?

Albini: There is a simple answer, which is where I tend not to use Rust: very quick scripts and small programs, because Rust is also notoriously slow to compile. It's getting faster, and there is a lot of effort to make it faster, but if you just need to write a quick script, maybe to do some setup in CI, it's probably not worth it. The more complex answer is that different parts of the Rust ecosystem have different maturity, because Rust is young.

Even though the first stable release was 9 years ago, in the world of programming languages that's fairly young, and it's a fairly young ecosystem. Depending on how much effort has gone into a given part of it, the experience you get varies drastically, and it might not be worth using Rust there. For example, in the game development space, there are some excellent projects already showing how games can benefit from Rust, but it's not yet as complete an ecosystem as you get with C++.

With UI development, there are some libraries that let you create nice-looking UIs fairly easily, but they're still not as mature as Qt or other environments. It really depends on where you want to use Rust. It's up to each of us to look at the current ecosystem and see whether it's enough for us, because even though it might not be as polished, it might still be enough for what we need to do, or to decide whether it's worth contributing to the ecosystem to push it forward, or whether adopting Rust there is just not a reasonable business decision.

Participant 3: I'm still trying to wrap my head around procedural macros. I might be wrong, but I have not seen them in other programming languages; they're not really common, unlike functions. Is there an application you can think of where functions can't be used and procedural macros are a good fit?

Albini: A place where functions cannot be used and procedural macros can is, for example, the Clone implementation we saw before, because that implementation depends on knowing which fields the struct has; it depends on knowing the shape of the data you want to clone. That is information you don't have access to at runtime. Even if Rust had reflection support, it would probably be unacceptable from a performance standpoint in a lot of places. Procedural macros really shine where the code you need to generate changes depending on the shape of the data you have.

A good way to think about it is that procedural macros are what you would use where you would reach for reflection in other languages. Where in Java you would use reflection, in Rust you use procedural macros. On one hand that's worse, because procedural macros are harder to write: you need to actually parse the struct and generate code from it, rather than just invoking a reflection method. On the other hand, they bring the efficiency and maintainability benefits I mentioned before.


Subscribe for MMS Newsletter

By signing up, you will receive updates about our latest information.

  • This field is for validation purposes and should be left unchanged.


150,000 students to benefit from enhanced skills programme of MongoDB – UNITED NEWS OF INDIA

MMS Founder
MMS RSS

Posted on mongodb google news. Visit mongodb google news




Cassandra 5.0 promises better search after indexing redesign – The Register


Posted on nosqlgooglealerts. Visit nosqlgooglealerts

The Apache Software Foundation Cassandra project has released the 5.0 iteration of the wide-column store database boasting new features to improve vector search, a Java update and enhanced performance.

Cassandra is designed to support highly distributed systems where writes exceed reads, and so-called ACID compliance is not important. Netflix, for example, has been using Cassandra since 2013, replacing Oracle databases and using the NoSQL system to support global accounts and customer data worldwide.

New features for the 5.0 release, made generally available on Thursday last week, include Storage Attached Indexes (SAI), which promise to boost query flexibility and performance, especially for large datasets. The approach moves indexing closer to the data, improving query performance, and replaces the original secondary index feature.

The database upgrade also includes Vector Search, which supports a vector data type and indexing for Approximate Nearest Neighbor (ANN).

“Cassandra 5.0 lays the groundwork for advanced AI and machine learning applications. Developers building Generative AI applications can combine Cassandra’s scale and distribution with the latest search technology,” the project said in a statement.

Sarma Pydipally, a Cassandra contributor and freelance database engineer with experience of large-scale systems in the telecoms sector, said the two updates would work together to improve the database’s performance in support of applications based on GenAI.

He explained that in earlier approaches to indexing, each node would only contain the index information for its own data, slowing the performance of distributed queries. “The SAI [Storage Attached Indexes] model is little bit different as it is attached to the data itself.”

“Storage attached indexes are going to change the way we create indexes in Cassandra. It seems to solve that indexing problem for Cassandra, and the vector search is totally dependent on it,” he said.

Patrick McFadin, developer relations veep at Cassandra vendor DataStax, said the net effect of changes to the way data is stored and the way storage is managed would mean organizations “can run less Cassandra.”

“You don’t have to use as many nodes to get the same amount of effect… that shows up as node density, you get much higher node density,” he told us.

The Cassandra 5.0 release also upgrades Java support to JDK 17, bringing performance improvements of up to 20 percent in some cases, according to the community announcement.

With the emission of 5.0, the Cassandra project announced the end-of-life (EOL) for the 3.x series, including versions 3.0 and 3.11. It said it would evaluate security patches contributed to unmaintained branches on a case-by-case basis while the application of CVE fixes is not guaranteed.

As of last year, a DataStax survey showed around a third of Cassandra users were on the 3.x releases, McFadin said. ®



MongoDB sees benefit from AI as models become more accurate | Seeking Alpha


Posted on mongodb google news. Visit mongodb google news

Image: MongoDB office in Silicon Valley (photo: Michael Vi)

MongoDB (NASDAQ:MDB) said that the company would be a beneficiary as large language models become more accurate and get better.

MongoDB's CEO Dev Ittycheria, as well as CFO and COO Michael Gordon, were speaking at the Goldman Sachs Communacopia + Technology Conference on Monday.



MDB INVESTOR ALERT: Bronstein, Gewirtz & Grossman LLC Announces that MongoDB …


Posted on mongodb google news. Visit mongodb google news

New York, New York–(Newsfile Corp. – September 9, 2024) – Bronstein, Gewirtz & Grossman, LLC, a nationally recognized law firm, notifies investors that a class action lawsuit has been filed against MongoDB, Inc. (“MongoDB” or “the Company”) (NASDAQ: MDB) and certain of its officers.

Class Definition

This lawsuit seeks to recover damages against Defendants for alleged violations of the federal securities laws on behalf of all persons and entities that purchased or otherwise acquired MongoDB securities between August 23, 2023, and May 30, 2024, inclusive (the “Class Period”). Such investors are encouraged to join this case by visiting the firm’s site: bgandg.com/MDB.

Case Details

The complaint alleges that on March 7, 2024, MongoDB reported strong Q4 2024 results and then announced lower-than-expected full-year guidance for 2025. The Complaint adds that the Company attributed this to a change in its “sales incentive structure,” which led to a decrease in revenue related to “unused commitments and multi-year licensing deals.” Following this news, MongoDB’s stock dropped $28.59 per share to close at $383.42. Then, on May 30, 2024, MongoDB further lowered its guidance for the full year 2025, attributing it to “macro impacting consumption growth.” Analysts commenting on the reduced guidance questioned whether changes to the Company’s marketing strategy “led to change in customer behavior and usage patterns.” Following this news, MongoDB’s stock dropped $73.94 per share to close at $236.06.

What’s Next?

A class action lawsuit has already been filed. If you wish to review a copy of the Complaint, you can visit the firm’s site: bgandg.com/MDB or you may contact Peretz Bronstein, Esq. or his Client Relations Manager, Nathan Miller, of Bronstein, Gewirtz & Grossman, LLC at 332-239-2660. If you suffered a loss in MongoDB you have until September 9, 2024, to request that the Court appoint you as lead plaintiff. Your ability to share in any recovery doesn’t require that you serve as lead plaintiff.

There is No Cost to You

We represent investors in class actions on a contingency fee basis. That means we will ask the court to reimburse us for out-of-pocket expenses and attorneys’ fees, usually a percentage of the total recovery, only if we are successful.

Why Bronstein, Gewirtz & Grossman

Bronstein, Gewirtz & Grossman, LLC is a nationally recognized firm that represents investors in securities fraud class actions and shareholder derivative suits. Our firm has recovered hundreds of millions of dollars for investors nationwide.

Attorney advertising. Prior results do not guarantee similar outcomes.

To view the source version of this press release, please visit https://www.newsfilecorp.com/release/216213
