Retail and high-performance databases

MMS Founder
MMS RSS

Posted on nosqlgooglealerts. Visit nosqlgooglealerts


Fabio Gerosa, Sales Director Italy at Couchbase, explains why today’s retail companies must be able to count on a database designed for superior performance and availability, to meet the expectations of online customers 24/7 and globally.

Today’s leading retailers and e-commerce companies face many IT challenges. They are confronted with ever-increasing demands to deliver great customer experiences that are fast and personal, contextual and localized. At the same time, they have to manage growing volumes of users and data, while reducing costs and time-to-market. These expectations, combined with the competitive pressure of global giants, push them to re-architect existing applications.

Retail – dispersed data

Upgrading legacy systems can be a daunting task, especially for established retail businesses. Thousands of physical stores, in addition to an online presence and mobile applications, lead to data being dispersed across various systems. And since most of these services and applications have likely grown organically over time, they often fail to provide a cohesive experience to customers.

World-class database performance

According to estimates by the B2C eCommerce Observatory, the total value of online purchases in Italy in 2022 reached 48.1 billion euros, 20% more than in 2021. Organizations not only have to face competition from their own sector, be it clothing, cars, electronics, office supplies or other products, but also from Internet companies such as Amazon and eBay. These tech-savvy, digitally born companies are known for the way they continually improve their e-commerce applications, as well as for the speed and agility with which they release new features.

Retain customers

The overall goal of these companies is to unify all customer interactions that occur across the web, mobile, and point of sale, and through that create a seamless experience that improves business and customer retention. It is therefore essential to choose a database platform that allows you to scale and add new services with flexibility.

Retail – a multifaceted user experience

Organizations are increasingly complex. Today’s retailer has acquired new business units, added new channels or implemented programs such as customer loyalty monitoring, going through a series of technological evolutions along the way. A typical tech journey might start with a physical store, then add a separate loyalty program, build an online store as a completely separate business unit, and develop a corresponding mobile application. The user experience can include in-store, online and mobile shopping, home delivery and product pickup, a loyalty program and rewards/points.

The benefits of relying on a database with superior performance

As for the technologies in use, a current e-commerce platform can be built on numerous point technologies, which together can amount to a monolithic system of systems, making it complex to add more data and give users a better experience. Because such a system is hard to customize and update, there is simply no agility.

Adopt NoSQL database technology

To meet the needs of today’s digital consumers, retailers are leveraging NoSQL databases to move from monolithic solutions to microservices-based architectures. Digital Economy companies are adopting NoSQL database technology to build and run modern web and mobile applications. Why? Because NoSQL is more effective than relational technology in meeting the performance, scalability, availability, agility and cost-effectiveness requirements of these applications.

More adaptability

These pressures create a new set of requirements for operational databases, which must:

  • Deliver data requests with sub-millisecond latency.
  • Scale to meet peak demand (such as Black Friday), easily and affordably.
  • Provide 24x7x365 availability.
  • Easily adapt as data types and queries evolve.
  • Acquire, aggregate and archive multi-structured data from many sources.
  • Support multiple channels and devices.
  • Replicate data between data centers globally.
  • Integrate with other big data tools like Hadoop, Spark, Kafka, and more.
  • Speed up and simplify development.

Offer a valuable customer experience

The operational database must also be versatile enough to support a variety of use cases: for example, customer profile management, shopping cart, store sessions, product catalog and pricing, or the 360-degree view of the customer and loyalty program management, to name a few. For architects, developers, and operations engineers, the pressure to deliver great customer experiences, reduce costs, and accelerate innovation has never been greater.

A database with superior performance for retail

When it comes to customer experience, performance and availability are key. Whether customers are aware of it or not, their interactions happen against a database. They access data about products, customers, and engagement, and if that data is not readily available, the customer experience will suffer.

Buyers expect every request, whether it’s to find and view a product, add it to the cart or proceed to checkout, to be handled immediately, in sub-second time. It doesn’t matter what time it is or what time zone they’re in, whether it’s Black Friday, Cyber Monday or Super Sunday. Shoppers expect e-commerce web and mobile applications to be available 24 hours a day, 365 days a year, anywhere in the world, on any device.

Always meet expectations

Historically, relational databases have been the bottleneck, due to the difficulty of managing variable and ever-larger data volumes, and of maintaining performance and availability. Today, businesses need a database designed for superior performance and availability that meets the expectations of a global, 24/7, online customer base. As a result, across all retail and e-commerce categories, from automobiles to fashion to office supplies and more, innovative companies are adopting NoSQL to improve customer experiences, operational efficiency and agility.




Presentation: WebAssembly: Open to Interpretation

MMS Founder
MMS Rob Pilling

Article originally posted on InfoQ. Visit InfoQ

Transcript

Pilling: I’m Robert Pilling. I’ll be doing a talk on WebAssembly. I work for Scott Logic, where we deal with things like financial software, government work, and general bespoke consulting. I’ve dabbled in trading systems, like web apps backed by the cloud, and even done a few years of mobile development as well. In my spare time, I tinker with compilers and explore WebAssembly. I run an internal Rust meetup, where we are currently mob programming our way through writing a trade matching web server. This is great for knowledge sharing and discussion, which usually ends up going down rabbit holes, and irons out some creases in knowledge around the language.

Outline

First, I’ll be giving a high-level overview of WebAssembly. Then we’ll get into the meat of things. We’re going to write an interpreter for WebAssembly. To do this, we’ll have a quick look at compilers, cover the three main parts of our interpreter: the stack, memory, and control flow. By the end of the talk, I hope to have enough of an interpreter cobbled together that we can do a little bit of math with it. Watch this space.

Why WebAssembly?

Why WebAssembly? A lot of people say that the browser is a pretty specific place to focus on, but WebAssembly has applications elsewhere. As you may have heard mentioned, server-side compute, Lambdas or functions as a service: we can use WebAssembly for that, which allows us to nicely encapsulate our code. We can also sandbox local code with it. You’ve probably heard of supply chain attacks. Things like Node modules: if they were WebAssembly, it’d perhaps be a little bit easier to sandbox them. What we have at the moment, JavaScript in the browser, is sandboxed untrusted code, but it’s difficult to make it go quickly; we need a lot of infrastructure for that. WebAssembly makes this a little bit more trivial. Then we move on to things like micro-frontends on the web. At the moment, if you want to have different apps in your web program, these are all isolated from each other via things like iframes. They talk using postMessage, which means all of the messages between these different micro-frontends need to be serialized, and it can be quite slow. WebAssembly allows us to run different apps all within the same process, so it’s a lot faster. We can also target various different machine architectures with the same WebAssembly modules, which is really nice. LLVM is one of the tools that powers this. It’s a code generator, so it can generate WebAssembly from Rust, Swift, Objective-C, and C++, amongst other languages. That’s what allows us to have such a wide range of input languages. That’s WebAssembly.

Compilers

We’ve covered that, so let’s move on to compilers. WebAssembly works a lot like LLVM, so we can use it as an intermediate representation of our code. This is great: any of those languages can now run in a browser in a local sandbox, or even, say, in a Lambda function as a service. We can build all of our code as WebAssembly, and leave the nitty-gritty details to the implementation. Browsers, or any WebAssembly host, will then execute this, and we only need to ship a single binary. This is great, if a little confusing. I’ve thrown around a lot of acronyms, that’s all well and good. I find the best way to understand something is to write a bit of a crummy version of it. I’ve done this with side-scrolling games, HTTP, text editors and a C compiler. If WebAssembly had been a thing back when I wrote my C compiler, I’d have probably not gotten nearly as deep into writing things like the code generator that churns out the machine code. WebAssembly allows us to avoid register allocation, stack frame layout, and calling conventions, to an extent depending on the source language that we’re building from. That’s compilers, and a pretty high-level view of where WebAssembly comes in.

Stack

Speaking of writing compilers, what’s the simplest machine that we can write? A lot of you will probably have written one at some point, perhaps without even realizing it. A regular expression can be thought of as a little machine all on its own. We move it between states by feeding it characters, and we can take a peek at that state when we’re done, so once we’ve fed it its last character. If it happens to be in the accept states, or here if it’s in the rightmost one for the d character, then we say that the regular expression has matched. This is interesting. We can do this and yet the machine has no variables and no memory, which actually limits what it can do. What’s the next step? How can we improve this? While we can do a lot with regular expressions, believe me, we still can’t perform some seemingly simple tasks. It’s impossible, for example, to write a regular expression to match balanced brackets, because it has no notion of counting or remembering.
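
A minimal sketch of that idea in Rust, assuming a made-up state machine for the pattern ab*d (the regex on the slide isn’t reproduced in the transcript): we feed characters to step the state along, then peek at the final state to decide whether it matched.

// A hypothetical state machine for the regular expression "ab*d".
#[derive(Clone, Copy, PartialEq)]
enum State { Start, SeenA, Accept, Reject }

fn step(state: State, c: char) -> State {
    match (state, c) {
        (State::Start, 'a') => State::SeenA,
        (State::SeenA, 'b') => State::SeenA, // loop on 'b'
        (State::SeenA, 'd') => State::Accept,
        _ => State::Reject,
    }
}

fn matches(input: &str) -> bool {
    // Feed every character, then peek at the state we ended up in.
    input.chars().fold(State::Start, step) == State::Accept
}

fn main() {
    assert!(matches("abbbd"));
    assert!(!matches("abc"));
}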

This brings me to an interview question that a lot of you might have run up against. How can we tell if a string contains balanced brackets? Here’s the solution and the first bit of live coding. You might have stumbled across this or even written something similar yourself. We can use a stack. Here we have two examples. This first one contains balanced brackets, and the second one doesn’t: we’ve got this open parenthesis here, and then an angle bracket. Then I’ve just wrapped this in a little shell function, which calls our interview question, balanced, which tells us whether the brackets are balanced or not. This function just has a stack. Then for each character, if it’s an open bracket, then we push a closed bracket onto the stack, similar here. Otherwise, we expect to pop off from the stack the most recent one that we saw. If that’s not what we expected, then we return false. Otherwise, once we’ve got to the end of the string, if we’ve exhausted the stack, then that means that we’ve got some balanced brackets. If I just run this, we’ll notice the first example here is true and the second one is false, which is exactly what we expect. This is pretty useful. We’ve got a way to count now. We can look at nested brackets the same way, but actually, we can do a lot more than that. We can now write parsers for a large number of programming languages, in fact, all just using a stack. Surely, a stack, it seems easy. Is it that easy to write? Maybe we could have a look.
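
A rough Rust reconstruction of the check just described (the demo’s own source isn’t shown in the transcript, so the bracket pairs and names here are assumptions):

// Push the expected closer for each opener; any other character must
// match the most recently pushed closer.
fn balanced(s: &str) -> bool {
    let mut stack = Vec::new();
    for c in s.chars() {
        match c {
            '(' => stack.push(')'),
            '<' => stack.push('>'),
            '[' => stack.push(']'),
            '{' => stack.push('}'),
            _ => {
                if stack.pop() != Some(c) {
                    return false; // closer didn't match, or stack was empty
                }
            }
        }
    }
    stack.is_empty() // balanced only if every opener was closed
}

fn main() {
    println!("{}", balanced("(<>)")); // true
    println!("{}", balanced("(<)>")); // false
}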

Let’s take a look at this stack.rs that we have here. Now, for a stack, we need a way to create it. We’re going to need a way to push something onto the stack, and to pop. It would be useful if we knew how to tell if the stack is empty as well. For this stack, we’ll need some backing storage, so let’s just use a vector; it’s a pretty ubiquitous datatype. We can have a go at implementing these functions. The new function, we just want to return a stack, and we’ll just initialize the vector there. Push, I suppose we want to take some kind of t, and then we’ll just push that onto the end of the vector. Pop is going to be similar, except this time instead of taking a t, we’ll return a t. Again, we’ll do a similar thing, except instead of pushing, we just pop from the vector. This gives us an option, so we’ll just have to unwrap that and just assume that it will always succeed, to make things a bit simpler down the line. Finally, we’ve got is_empty. For this we’ll just want to return a Boolean. Again, we can almost cheat and just say if the vector is empty, then we’re empty. I’ve got a test down here that just pushes and pops onto the stack. We start out with a stack of 1, 2, 3; it shouldn’t be empty; we push 4, and then when we pop, we should get the 4 back. Then we’ll get a 3, a 2, the 1, and then it’s finally empty. Let’s give that a little run. We’ll compile it first, with tests, to check that it all works. Once that’s built, if we run it, we should see our tests all succeeding. This is done. We’ve written a simple stack.
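
Put together, the stack.rs being described would look something like this (a sketch; the exact code from the talk isn’t shown):

// A Vec-backed generic stack with the four operations described above.
struct Stack<T> {
    vec: Vec<T>,
}

impl<T> Stack<T> {
    fn new() -> Self {
        Stack { vec: Vec::new() }
    }

    fn push(&mut self, t: T) {
        self.vec.push(t);
    }

    fn pop(&mut self) -> T {
        // Vec::pop returns an Option; as in the talk, unwrap and assume
        // popping always succeeds to keep things simple down the line.
        self.vec.pop().unwrap()
    }

    fn is_empty(&self) -> bool {
        self.vec.is_empty()
    }
}

#[test]
fn push_and_pop() {
    // Start with a stack of 1, 2, 3; push 4; pop everything back off.
    let mut s = Stack { vec: vec![1, 2, 3] };
    assert!(!s.is_empty());
    s.push(4);
    assert_eq!(s.pop(), 4);
    assert_eq!(s.pop(), 3);
    assert_eq!(s.pop(), 2);
    assert_eq!(s.pop(), 1);
    assert!(s.is_empty());
}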

While stacks look simple, they’re actually pretty magical, because they allow us to do one of the most exciting things in the world: math. One of the early things we learn in school is that we can’t answer this equation by saying 12 minus 5, then we add 2 to that, and then we times the whole thing by 3. It doesn’t work like that. We’ve got to group the operations, which we can arrange in a little tree, a bit like this. The tree helps us do things in order. We can perform a traversal where we walk along the tree to calculate the result. In fact, we can just demonstrate a little executor to do just this for us. Up here, I’ve just got some code; we’ll just ignore the top of it for now, and we’ll take a look at this bit. We’ve got an operator, and that operator is a plus. Then, what are we adding together? We’re adding together the result of this operator, which is minus, a 12 and a 5, and then also this operator, which is times, a 2 and a 3, which corresponds to part of the tree.

If we look at what we do with this code, we just say .eval on it. How does that work? If we look up here, in the eval, an operator will have a left and a right-hand side. It’ll eval both of those. Then depending on what operator we are, we’ll move on to add or subtract or multiply. We’ll just do that recursively. This can’t go on forever. At some point, we hit a leaf like the 12 here. I suppose we need some value. To evaluate a value, it is just the value itself. Let’s give that a run and see what it prints out. We get 13, which is actually the right answer, which is a relief. What we have here is called an interpreter. The Python code runs, and the Python code is what cranks through and calculates it. We want to do a bit better than that. We don’t want Python around to interpret an abstract syntax tree, which is what this bit is called. Let’s instead try to generate some code. What we have here is pretty similar, except there’s a few crucial changes: this time instead of printing something out, we just emit some code at the end. Our syntax tree is exactly the same. This time, for a value, instead of actually returning it, we don’t want to return anything, we want to actually emit some code. Here, we’re just going to use a stack, which should be familiar. We’ll just push whatever our value is onto the stack. Then an operator is going to behave similarly. We’ll get the left-hand side to emit itself and then the right-hand side to emit itself. Then we’ll pop from the stack into, let’s just call these two variables, an l and an r. We’ll pop into the r, because the right-hand side was the most recent thing that we emitted. Then we’ll pop the next one into the l, which corresponds to the left-hand side. Then, as before, if it’s a plus, we do a plus, if it’s a minus, we do a minus, and so on. We’ll push that result back onto the stack. If we run this, we now don’t actually get an answer, we get what looks like a lot of gibberish to do with pushing and popping onto the stack.
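
The talk does this part in Python; sketched in Rust for consistency with the rest of the interpreter, the two passes look roughly like this (names are assumptions, not the talk’s exact code):

// An expression tree that can either be evaluated directly (the
// interpreter) or emit stack code (the code generator).
enum Expr {
    Value(i32),
    Op(char, Box<Expr>, Box<Expr>),
}

impl Expr {
    // Interpreter: evaluate both sides, then apply the operator.
    fn eval(&self) -> i32 {
        match self {
            Expr::Value(v) => *v,
            Expr::Op(op, lhs, rhs) => {
                let (l, r) = (lhs.eval(), rhs.eval());
                match op {
                    '+' => l + r,
                    '-' => l - r,
                    '*' => l * r,
                    _ => panic!("unknown operator"),
                }
            }
        }
    }

    // Code generator: a value pushes itself; an operator emits both
    // sides, then an instruction that pops r, pops l, pushes l op r.
    fn emit(&self, out: &mut Vec<String>) {
        match self {
            Expr::Value(v) => out.push(format!("push {v}")),
            Expr::Op(op, lhs, rhs) => {
                lhs.emit(out);
                rhs.emit(out);
                out.push(format!("binop {op}"));
            }
        }
    }
}

fn main() {
    // (12 - 5) + (2 * 3)
    let tree = Expr::Op('+',
        Box::new(Expr::Op('-', Box::new(Expr::Value(12)), Box::new(Expr::Value(5)))),
        Box::new(Expr::Op('*', Box::new(Expr::Value(2)), Box::new(Expr::Value(3)))));
    assert_eq!(tree.eval(), 13); // the interpreter's answer
    let mut code = Vec::new();
    tree.emit(&mut code); // the "gibberish": pushes and binops
    println!("{}", code.join("\n"));
}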

This is interesting. We still need to compute our equation. We need some little machine language that helps us out with this. Here’s how we might do that. We’ll have a machine. I suppose the first thing that we need in this machine is a stack. Let’s just add that in. We’ll call it a stack of values just because these generic values we’ll deal with. Then, let’s create one of these machines, so this will be up here. We’ll just initialize the stack to an empty stack. We want some way of running some code. We’ll just say, m.run, and we want to give it some code in here. Rather than typing out this code by hand, I think we can probably use the code gen from earlier. Let’s just tweak this a little so it looks a bit more Rusty. We’ll have some instructions. For this constant here, we’ll probably say instruction, Const, and then we’ll just create an I32 with the value that we’ve got, like so. Then we’ll want to do something similar for up here. We’ll get rid of those pops and we’ll have that be implicit in how the instruction works. For the add, I suppose we want to say we’ve got a binary operator, and this binary operator is an add instruction. Then we’ll want to do something similar for the times and the subtract, so we just update that, like so. Then if we run this code gen, or if we read from it into here, we’ve now got our Rust code with some pretty simple operands ready to go.

How do we get the result of this out? We need to see what’s at the bottom of the stack, so let’s just print that for now. We’ll just say, pop from the stack. We’ve got these instructions, but we don’t actually have any implementation in our run method up here. Let’s just pop something in here. We say for each instruction, what we’re going to do is we’ll effectively just switch on that instruction. Maybe we’ve got this Const instruction, which we’ll just use to push onto the stack. Or maybe we’ve got this binary operation instruction, in which case, we’ll want to go a little bit deeper and just decide on which binary operation it is. We’ve got either add, sub, or multiply, so we’ll just pop those in here. For an add, we’ll want to add some left-hand side to some right-hand side. Then similarly for the subtract and multiply. We just update those. That will give us our results. We’ll just call that x. We want to push that onto the stack at the end a little bit like this. Where do we get those from? Similarly, to before, we want to pop the right-hand side first and then left. Let’s do that. Then I suppose there might be some other binary operators, we’ll just ignore them for now and pop in a panic. Same for other instructions as well. This is our machine. Let’s see if it compiles. We’ve now got a machine and the output is 13. What we’ve done here is we’ve taken our syntax tree that we originally had in that Python code. We’ve generated some instructions for it, and we’ve now evaluated these in our little Rust machine. What’s the use of this? Wouldn’t it be handy if we could change the numbers, for example, and make our machine a bit more like a CPU, and tack on some features?
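
Pieced together, the machine at this point looks roughly like this (a sketch of what the talk builds; the talk wraps values in an I32 type, simplified here to a bare i32):

enum BinOp { Add, Sub, Mul }

enum Instruction {
    Const(i32),
    BinaryOp(BinOp),
}

struct Machine {
    stack: Vec<i32>,
}

impl Machine {
    fn new() -> Self {
        Machine { stack: Vec::new() }
    }

    fn run(&mut self, instructions: &[Instruction]) {
        for instruction in instructions {
            match instruction {
                Instruction::Const(v) => self.stack.push(*v),
                Instruction::BinaryOp(op) => {
                    // The right-hand side was emitted last, so pop it first.
                    let r = self.stack.pop().unwrap();
                    let l = self.stack.pop().unwrap();
                    let x = match op {
                        BinOp::Add => l + r,
                        BinOp::Sub => l - r,
                        BinOp::Mul => l * r,
                    };
                    self.stack.push(x);
                }
            }
        }
    }
}

fn main() {
    use crate::{BinOp::*, Instruction::*};
    let mut m = Machine::new();
    // (12 - 5) + (2 * 3), as emitted by the code gen
    m.run(&[Const(12), Const(5), BinaryOp(Sub),
            Const(2), Const(3), BinaryOp(Mul),
            BinaryOp(Add)]);
    println!("{:?}", m.stack.pop()); // Some(13)
}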

Memory

We’ve covered the stack and generated some stack code using pushes and pops to evaluate an equation. Stacks are great, but we can only look at the top value. Memory is a bit more powerful. Let’s take a little look at that. We want our machine to be able to access a big flat array of bytes: memory. Let’s see what changes we need to make to our machine for that. First of all, we want some memory, actually within our machine, and we’ll just say it’s a vector of bytes. That’s easy enough. Then we want to initialize that down here. Again, we can just say, create an array, we’ll just go with 64k of memory, and we’ll just pop that into a vector. Now let’s have a look at this 12 here; let’s maybe make that live in memory instead. What we’ll do is we’ll need to have an address for it. We’ll just call it a, and let’s have it live at address 42, why not? What we’ll want to do is rather than having the 12 hardcoded here, we want to do a load from this address. We’ll need some load instruction. Our 12 is gone, so we’ll need to initialize that as well. Let’s have some store on our machine. We want to store this 12 as a value, so we could say it’s still an I32. We’ll just pass that over to the store method of our machine. We now load in the 12 indirectly from memory. At the end of this, we want to have some way of storing the result, so let’s have a store instruction. This store instruction will take this whole value here, which is calculated, and it will store it at an address. We’ve got the value first, which is this part, and we also need an address. We’ll have that as address r, which we’ll call our result. Let’s just pop that at memory address 12, for instance. That means now rather than popping the stack, we just load from that address, so we’ll just say, load from address r, and there’s a little bit of Rust casting going on. That should give us our result. We’ve changed our bytecode here. We now just need to implement these instructions, so a load and a store. Let’s just move up here, and first of all let’s do a load. If we’re loading, I suppose we’ve got an address to load from, which we’ll just pop off the stack. From this, we will then get our value. We’ll just say self.load from that address, and we’ll do a bit of a conversion there. Then we’ll push that back onto our stack.

Store is going to be pretty similar, except this time we’re going to have two operands instead of one. First of all, we have the value. We’ll say let value equals, like that, and then we’ll get the address. Then we want to say self.store. Again, we will delegate to this store function. We’ll store to a particular address, this value. Now we have just these two functions to implement. Let’s just go to there. We’ll do the load one first. This one is perhaps a little bit simpler in implementation. We’ll just take an address, and we’ll give back a value. There’s a little bit of Rust wrangling that we need to do, that I’ll explain shortly. We need to decide with this memory what it looks like. It’s just an array of bytes, but we’re dealing with 32-bit integers. I suppose we’re going to have a set of bytes, and four of them make up the 32-bit integer that we’ll want to load from memory. We’ll just say self.memory, a particular address and address plus 4. We’ll take a slice of four bytes of that memory. Then this is where our try_into comes from above, which is where we’re just asserting that this particular slice here is of size 4, and so it will fit into this array of 4. That’ll always succeed. We can just effectively assert that at the end. That’s great. We’ve got our bytes. How do we convert that into a 32-bit integer? We’ll just match what most modern machines do. We’ll say these bytes are little endian. What that’s saying is, if I have the value 123, that’s actually stored in memory as 3, 2, and then a 1. There’s a few reasons for this. If we have a pointer to the start of it, and we truncate it, we actually just still get a 3, which is what we want, in most cases. We’ll create an I32 from those little-endian bytes. That gives us our value. That’s done.

Our store function is going to be pretty similar. This time, we’re mutating ourselves, obviously, and we’re going to want some value to store instead, and we won’t return anything. This time, we’ll have our bytes and we’ll get them from the value itself. Let’s just do that up here. We’ll get the bytes from the value. Then we want to assign that into memory. We’ve got this slice of 4, and we’ll just say, copy from our local slice of bytes here. That’s our store and our load. We’ve made a few decisions on things like the endianness of our memory. Moment of truth, let’s see if this compiles. It compiles. Does it run? We get exactly the same answer. It may seem like we’ve done a lot of work there for no benefit, but what we’ve actually done is added a whole new feature to our machine. It actually turns out, this is how languages like Java work. We’ve got a little Java program here that I’ve called eval. This program takes three inputs, multiplies two of them together, and then adds the third one on to that. If we compile that, we get a little eval.class file. We can actually dump the bytecode that Java has generated for us here. If I say javap -c Eval, this is the bytecode. Ignoring the first bit, you’ll notice we’ve got our eval function, and we have some loads, multiplies, and then returns. What this is doing is exactly the same. It’s a stack machine that’s pushing and popping from a stack. I think we’ve made some good progress.
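
The load/store pair being described comes out something like this (a sketch of the memory side only, assuming the 64k of memory and little-endian layout from the talk):

struct Machine {
    memory: Vec<u8>,
}

impl Machine {
    fn new() -> Self {
        // 64k of memory, as in the talk.
        Machine { memory: vec![0; 65536] }
    }

    fn load(&self, addr: usize) -> i32 {
        // Slice out four bytes; try_into asserts the slice fits [u8; 4].
        let bytes: [u8; 4] = self.memory[addr..addr + 4].try_into().unwrap();
        // Little endian: the lowest byte is stored first.
        i32::from_le_bytes(bytes)
    }

    fn store(&mut self, addr: usize, value: i32) {
        let bytes = value.to_le_bytes();
        self.memory[addr..addr + 4].copy_from_slice(&bytes);
    }
}

fn main() {
    let mut m = Machine::new();
    m.store(42, 12); // park the 12 at address 42, as in the talk
    assert_eq!(m.load(42), 12);
}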

Goto

We are missing one major feature from our machine. Java has if statements and loops, which we don’t have yet. Now, how would we implement an if? We can do some test, I guess we need, and then we need something like a goto, which a lot of people tend to find morally offensive. There is a way around it. What we could do is we could have some branch instruction, which will behave as our test or check. Then we need to decide where to branch to. We could say there’s two choices. We might either want to restart a series of instructions to do them again, or we might want to skip ahead and just avoid doing a collection of instructions. That gives us two things: a loop and a block. That will look a little bit like this. As any game developer would probably point out, you can do anything with enough if statements and while loops. This is great, we can now do anything. Now, our branch instruction here, this br_if, has an index saying which block or loop we want to escape from, and then we either go to the start if it’s a loop or the end if it’s a block, which is what the arrows demonstrate here. I’ve also made up some call instructions here just for demonstration. You notice on the right, we’ve got this plain br instruction, which is a branch that will always be followed.

Why don’t we have a go at implementing that in our machine. We’ll get rid of these math evaluations from before. What we’re interested in this time is adding up all of the numbers between 1 and 100. Here’s our pseudocode, and that will look a little bit like this, so we just pop this into our run. Just sort out some brackets. Then I’ll talk you through it. We’ll start with a t, t for total, at 0, and then for each number in 1 to 100, we’ll just add that number to t. There’s faster or better ways of doing this, but I think this is a good test of control flow. We need to decide where to store our t, where to store our i, and we’ll have our limit variable. As before, we’ll just initialize these two variables. Total will be 0, and i will be 1. What we’ve got here is we now have this loop instruction, which then contains more instructions itself. I’ll just talk through that. We’ve got the loop block. Then if we have a look here, we’ll load i, we’ll load t, and then we’ll add them together. Then we’ll want to store that. We’ll take the value and the address t, and we’ll store. That’s adding i to t. Similarly, below, we then take i, we load it. We then take 1, add it to i, and then we take i’s address and we store it. We’ve added i to t, added 1 to i. Now we want to do a check. Let’s load i again. We’ll load our limit plus 1, so we’ll load 101, and we’ll subtract these. What’s going on here is we’re wanting to see if this subtraction comes to 0, because our test is basically going to say, is the register 0? If it’s 0, then we’ll not take the branch, otherwise we will. If we’ve taken away 101 from i, and that’s 0, then that means i must be 101. In that case, we don’t want to branch; otherwise, we’ll say branch to a depth of 0. The 0 just means that we’re going to go to this outermost loop here and continue the loop. Finally, at the end, I suppose we want to load our t and see what our result is.

We’ve got our bytecode here, and we just need to implement a couple of these new instructions. Let’s have a look at that. I suppose the first one that we want to implement is our BrIf, and that’s going to have some depth. Now with this, what we want to do is look at a condition that’s on the stack, so let’s just pop that off the stack first. We’ll just say, if it’s Boolean, so we’ll just do a bit of an into to get that, then we need to return some way of saying break at this depth. Let’s just invent an enum for that; let’s just go up here and we’ll create that. We’ve got our finish enum, and we can either finish code normally with a done, or I suppose we can break at some depth; we’ll just use a u32 for that. We’ve got that. I suppose we want our unconditional version of this. We’ll get rid of the if. We’ll no longer pop the value, and we’ll just say break at that depth. Those are our break instructions. Next, we need loop instructions. Let’s pop that in here. We’ve got a loop and the loop contains some instructions. How do we implement a loop? That’s easy, we can just pop it inside a loop. What we want to do is we want to run these instructions. Then, how they finish affects how we finish. If they come to some normal conclusion, then that’s fine. We’ll just complete ourselves as well. We’ll break, and this break will just get us out of this loop. If they come to some kind of a break, then this is where things get a little bit more interesting. I suppose if it’s breaking and the depth is 0, then that means it is us, so we’re saying we want to branch back up to this loop. We’ll just say continue here, but that’s only if the depth is 0. Let’s just inspect that. If the depth is 0, then we’ll continue. Otherwise, the depth is greater than 0, so we want to propagate this break. Let’s do that. When we propagate it, we just subtract 1 from the depth to make sure that it nests properly. If we’re breaking out of here, then we’ll subtract 1, and that will take us up to referencing the next block.

Our block instruction, like our loop, is going to be pretty similar. Let me just call that up. We’ll have a loop here. Only this time if we hit a done, then there’s no loop, so there’s nothing to break from. A done will actually just do nothing. Let’s just bring that back in. If we hit a break, then that means we want to finish executing the code that we’re at if the depth is 0; otherwise, we want to propagate it. We can just say if the depth is greater than 0, then propagate the break. Let’s see if that compiles. We’ve got a few things here. Our actual function needs to return some finish, so we’ll just add that in. The normal way of finishing will be just the normal done. Let’s give that a whirl. When I run this now, what we’re going back to is our code from before, where we’re counting off the numbers between 1 and 100, and that should sum to 5050, which it does. I’m pretty happy with our little machine that we have now. We can take some machine code and we can execute it. It’s quite laborious writing all of this machine code to run on our CPU, though. We’ve done a few nerdy examples here. I’m after running something a little bit larger, but I don’t want to have to mess around trying to generate all of this bytecode. It’d be handy if someone could do this for us.
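
Assembled, the control flow described above looks roughly like this (a sketch: the Finish enum and the depth arithmetic follow the talk, the rest is assumed):

// How a run of instructions ended: normally, or breaking out to a depth.
enum Finish {
    Done,
    Break(u32),
}

#[allow(dead_code)]
enum Instruction {
    BrIf(u32),
    Br(u32),
    Loop(Vec<Instruction>),
    Block(Vec<Instruction>),
    // ...plus Const, BinaryOp, Load, Store, as before.
}

struct Machine {
    stack: Vec<i32>,
}

impl Machine {
    fn run(&mut self, instructions: &[Instruction]) -> Finish {
        for instruction in instructions {
            match instruction {
                Instruction::BrIf(depth) => {
                    // Branch only if the condition on the stack is non-zero.
                    if self.stack.pop().unwrap() != 0 {
                        return Finish::Break(*depth);
                    }
                }
                Instruction::Br(depth) => return Finish::Break(*depth),
                Instruction::Loop(body) => loop {
                    match self.run(body) {
                        Finish::Done => break,        // fell off the end
                        Finish::Break(0) => continue, // restart this loop
                        Finish::Break(d) => return Finish::Break(d - 1),
                    }
                },
                Instruction::Block(body) => match self.run(body) {
                    Finish::Done | Finish::Break(0) => {} // carry on after it
                    Finish::Break(d) => return Finish::Break(d - 1),
                },
            }
        }
        Finish::Done
    }
}

fn main() {
    // The loop body pops one pre-seeded condition per iteration and
    // restarts while it is non-zero, so it runs three times here.
    let mut m = Machine { stack: vec![0, 1, 1] };
    m.run(&[Instruction::Loop(vec![Instruction::BrIf(0)])]);
    assert!(m.stack.is_empty());
}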

Mandelbrot

It just so happens that this machine that we have here is WebAssembly. Luckily for us, my colleague, Colin, wrote a WebAssembly compiler which can generate all of these little pushes, pops, consts, and so on for us. Just to clarify that, because it’s a lot to take in: we’ve got a machine that can execute a bunch of instructions that we’ve made up. We need a machine to actually generate those instructions in the first place, so that’s the code generator that gives us the WebAssembly that we feed into our machine, and then eventually we get the output. Colin’s talk is a good overview of how to put together a compiler. Colin invented a language called chasm, as shown here, and using that he wrote a program which can generate an image of the Mandelbrot set. At the end of his talk, Colin demonstrated this code by running it in a browser, which then displayed this Mandelbrot image. We’re going to take the place of the browser. How did I go about this? I went to the website that Colin made, where you can either interpret this chasm code, or you can run it as a compiler. I ran that partway through, and I paused it before it was about to execute, and I pinched all of the WebAssembly code that had been generated from Colin’s chasm. This gave me this mandelbrot.wasm. We’ll just have a look at that.

Here’s the file. That’s a binary format, so we can’t actually see what’s in it. There’s a few tools that we can use that will do that for us. If I run this wasm2wat tool, that will give us the WebAssembly in a text format, and it’s nicely indented, so we can have a rough idea of what’s going on. There’s a few loops here, and so on. Let’s get that into our machine. We’ll just get rid of this bit to begin with. Now, we run the same thing, wasm2wat, and we’ll run it on the Mandelbrot. This is great, so we’ve got it into our machine. The problem is, we can’t execute this. This isn’t Rust code, but we can tidy it up a little bit. We just get rid of some of the things that we don’t need. Then we need to just convert this into the bytecode that we had before. It’s a line-by-line conversion. It just so happens, I’ve got a throwback to earlier: talking of regular expressions, they might not be powerful, but they’re really useful. Here I’ve got some that can convert all of this WebAssembly code into Rust for us. Let’s just give that a go. Do all the substitutions and here we go. Before we had the WebAssembly, and now we’ve got it into Rust. That makes our lives a lot easier. I’ll just sort out some brackets like this. Let’s check we’ve got everything right. That’s our WebAssembly code.

There’s a few extra instructions here you might have spotted. First, we’ve got a few extra operators. We’ve got this BinOp, this less-than-signed operator, and a couple of others. We just bring those into our machine up here. Always wanted to say this: here’s one I made earlier. Here are the binary operators, so we’ve got divide and less than, so we just pop them in there. Then we’ve also got this new unary operator. Rather than the binary one which takes two values, this just takes a single one. We’ve got an equal-to-0 and a truncation one, so we’ll just deal with those. It’s the same story: you pop something off the stack, do something with it, and then push the result back onto the stack.

If we go back down, you’ll also notice there’s all these locals, which we can get and set now. That’s how Colin implements his variables, so x and y. A local is like a fixed bit of memory. We can just implement that similar to our memory up here. Again, we’re just putting that in. We’ll just treat the locals as a bit of a dictionary. We’ll get the indexed local, and we’ll push it onto the stack for a local get. For a set, we’ll just insert that into our dictionary. We need to create this dictionary. Let’s just do this here. Let’s use a hash map. It’s an index that maps on to a value, so we just need to bring that in as well. I suppose we’ll need to pass that around wherever we call run. We’ll just say, instructions and locals, like that, and right at the end here as well. We’ve got our locals passed around, and we’ll just create them up here. We can just assume that Colin’s code will always initialize a local before using it, so we don’t need to worry about any of that. There’s one other instruction that we need to look at. This is a store instruction, actually a store8, which is what Colin used for generating the bitmap. We want to just take the bottom 8 bits of a value and store them. Let’s just implement that up here near the store instruction. Similarly to the store, this one also pops a value, pops an address. Then we take the bytes, and then we’ll just take the bottom byte, so the zeroth byte, and store that at that address. Those are the extra instructions now added.
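
Those two additions might look like this (a sketch: locals as a HashMap passed into run, and store8 keeping only the bottom byte; the signatures are assumptions):

use std::collections::HashMap;

struct Machine {
    stack: Vec<i32>,
    memory: Vec<u8>,
}

enum Instruction {
    LocalGet(u32),
    LocalSet(u32),
    Store8,
}

impl Machine {
    fn run(&mut self, instructions: &[Instruction], locals: &mut HashMap<u32, i32>) {
        for instruction in instructions {
            match instruction {
                Instruction::LocalGet(i) => {
                    // Assume, as in the talk, a local is set before first use.
                    self.stack.push(locals[i]);
                }
                Instruction::LocalSet(i) => {
                    let v = self.stack.pop().unwrap();
                    locals.insert(*i, v);
                }
                Instruction::Store8 => {
                    // Pops a value, pops an address, keeps the zeroth
                    // (little-endian lowest) byte only.
                    let value = self.stack.pop().unwrap();
                    let addr = self.stack.pop().unwrap() as usize;
                    self.memory[addr] = value.to_le_bytes()[0];
                }
            }
        }
    }
}

fn main() {
    use crate::Instruction::*;
    let mut m = Machine { stack: vec![5, 0x1234], memory: vec![0; 64] };
    let mut locals = HashMap::new();
    // Set local 0 from the stack, read it back, store its low byte at 5.
    m.run(&[LocalSet(0), LocalGet(0), Store8], &mut locals);
    assert_eq!(m.memory[5], 0x34);
}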

There’s one other thing though: we’re generating a Mandelbrot, but at the moment, we’ve just been popping a single value from the stack. How can we view this? The way that Colin’s code works is it generates the Mandelbrot in memory, as a 100 by 100 image. If we have some bitmap, I suppose we want to have an image, so we’ll just say it’s a 100 by 100 image. Then we need to populate this image from the memory that’s in our machine. Then once we’ve done that, we’ll just print it out. Actually, let’s take a look at it. How do you do that? We can just go over all of the coordinates. For each coordinate, we’ll get the corresponding byte of memory, and we’ll just say the image at that coordinate is that byte. This will take us left to right, top to bottom of the image. Let’s see if that works. We’ve got a few imports missing that I just need to add. Let’s add them. We need to bring in our image and pass a reference to our locals here as well. There’s a reference to our locals, and our image is in this tty_bitmap module. We just bring that in and use the image from it. There’s some unused variable; that doesn’t matter. This is great. When we run this, it should generate a Mandelbrot. This is really the white-knuckle ride, because it’s quite a bit of code and we have no idea whether this will actually work correctly, because I’ve just cobbled this together. Here’s the moment of truth. Hopefully, it will give us a Mandelbrot image at the end. There, the program’s done. What do we have here? It’s the same Mandelbrot image from Colin’s, rotated a little because there’s differences in where the y axis starts. There it is, Colin’s code actually running on our little machine. I’m really pleased with that. I’m glad that worked.

What’s going on here? You could say this is a chasm program running in an awful interpreter. Do not use this in production. The software for it is chasm code, or WebAssembly. You can call it whichever you like; both are true. To quote Obi-Wan, it depends on your point of view. It’s pretty amazing. We’ve taken this WebAssembly thing, and we’ve run it nowhere near a browser, just from something that we’ve knocked together in the past 30 minutes, and we’ve still got the same result. We can peek behind the scenes and decide exactly how we want this WebAssembly code to run. We can not only run WebAssembly code, but anything that compiles to WebAssembly. We can run C code, C++, Swift, or even perhaps Rust code as WebAssembly, inside our Rust interpreter, which I think is pretty cool.

Recap

We’ve managed to go all the way from having a simple stack up to running pretty arbitrary code on our little machine. I think this is amazing. It really is turtles all the way down.

Questions and Answers

Eberhardt: We’ve got a few observations. One of them is Mandelbrot fractal! It’s my favorite fractal. I don’t know about you.

Pilling: Yes, mine too. It would have been cool if I could have added a bit of a zoom and just go into the Mandelbrot and see if it’s turtles all the way down there. It’s got to stop eventually.

Eberhardt: I’ve heard that it doesn’t. I haven’t got the computing power at my disposal.

Pilling: Same. Especially not if you’re running that interpreter that I wrote as well.

I quite liked that about WebAssembly. Once you can grasp the main instructions that you have, so stores, loads, and so on, you can tell what everything’s going to do. It like opens up the rest of WebAssembly for you really. It’s just a very accessible language.

Eberhardt: Yes, it’s super simple. Have you done anything with other assembly languages before then?

Pilling: Yes. I’ve done a little bit of tinkering with x86. That’s the native language that most of our machines run on, unless you’re on one of the new M1 Macs or a phone, of course, which is Arm. In my spare time, I’ve written a C compiler, which targets those. That’s quite a bit more difficult than WebAssembly. There’s no safety there. If you’re writing C or anything like that, you get the whole jungle, as well as the banana and the gorilla that’s holding on to it.

Eberhardt: How does the instruction set actually compare? Because I know the WebAssembly instruction set quite well. I couldn’t tell you all the instructions, but I know there are about 40 or 50, and they’re all really quite simple. In x86, how does it compare? Is it a larger instruction set? Are the instructions in some cases a little bit more complicated in terms of the operation you said before?

Pilling: Yes. There’s a lot of routes that we could go down there. When you ask about the size of the instruction set, for x86, there’s something ridiculous, like 50,000 instructions, and this is including all of the Single Instruction, Multiple Data extensions.

Eberhardt: Yes, SIMD massively multiplies.

Pilling: Yes, exactly. Then you’ve got a lot of built-in instructions that aren’t really used anymore, but are just there for backwards compatibility, so running it in 16-bit mode, or you’ve got things like figuring out sine, so trigonometric functions that no one uses because they can actually be implemented faster in software.

Eberhardt: There are trig functions on the CPU?

Pilling: Yes, exactly. I think there’s several cryptographic instructions on the CPU as well. There was a bit of a conspiracy about that where the Linux kernel was going to use some of these and some people were like, “No, we can’t use that. We don’t know the source of randomness. We don’t know if it’s cryptographically sound,” and so on. Yes, the instruction set has really swelled over the years and there’s quite a lot of scope creep. If you take it back to its basics, like the original 8086 chip, you can draw a lot of parallels with WebAssembly. You’ve got the basic add, multiply, and so on. Then you’ve got your loads and stores, your calls and returns and whatnot. I always get the feeling with WebAssembly, it’s a lot safer, and it’s more structured. Loops, as you saw in the talk: it’s already demarcated what exactly will be looped over. Whereas in x86, you just jump, and it just so happens to make a loop.

Eberhardt: By jump, you effectively mean jump to a different memory address.

Pilling: Yes, and that’s very arbitrary as well.

Eberhardt: Whereas with WebAssembly, it’s break to a particular stack depth.

Pilling: Yes, exactly that. You would think one would be more powerful than the other, but it turns out they’re both equally powerful, equivalent in the end, which I also find interesting.

Eberhardt: From a security perspective, break to a stack depth is inherently more secure than jump to a random address.

Pilling: Yes, exactly, and just as easy to generate code for.




Gradle 8.0 Provides Improved Kotlin DSL and Build Times

MMS Founder
MMS Johan Janssen

Article originally posted on InfoQ. Visit InfoQ

The Gradle team has released Gradle 8.0 featuring a reduction in the build duration and an improved Kotlin DSL that supports Kotlin 1.8 and Java 11 features in the build scripts.

Kotlin DSL, first introduced in 2018, may be used as an alternative to the Groovy DSL and offers improved content assistance, refactoring and documentation in supported IDEs such as IntelliJ, Eclipse, Apache NetBeans and Visual Studio Code. The latest release allows .gradle.kts scripts to use Java 11 features instead of Java 8.

An interpreter is now used for the declarative plugins {} block inside .gradle.kts scripts, which reduces the compilation time by about 20%. Version catalog aliases for plugins, such as alias(libs.plugins.mavenPublish), and type-safe plugin declarations, such as `my-plugin`, are not yet supported by the interpreter.

The embedded Kotlin was upgraded to 1.8.10 and the Kotlin DSL now supports Kotlin API level 1.8 instead of 1.4. Gradle 8 supports Kotlin Gradle Plugin 1.6.10 and newer; however, a lower Kotlin language version might be used, without support, by changing the language version and API version settings inside the Kotlin compile task. Gradle 8 supports Android Gradle Plugin 8 and newer, or 7.3 and newer when specifying the property android.experimental.legacyTransform.forceNonIncremental=true.

Gradle now uses the directory hierarchy of included builds to prevent conflicts. Consider, for example, the following directory layout:

settings.gradle
levelOneDirectory
    settings.gradle
    levelTwoDirectory
        settings.gradle

Previously, the levelTwoDirectory directory could be compiled with gradle :levelTwoDirectory:compileJava, but now it should be compiled with gradle :levelOneDirectory:levelTwoDirectory:compileJava.

Tasks of a buildSrc build may now be run directly on the command line. For example, gradle buildSrc:build may be used to run the build task in the buildSrc build. The new release allows the buildSrc to include other builds by defining the pluginManagement {includeBuild(anotherBuildDirectory)} or includeBuild(anotherBuildDirectory) in the buildSrc/settings.gradle.kts or buildSrc/settings.gradle settings scripts. Tests for buildSrc are no longer executed automatically as the build task is no longer run.

The configuration cache is an incubating feature that reduces build time by caching the result of the configuration phase and reusing it in subsequent builds. Gradle recommends starting by caching simple tasks first and, if successful, trying more advanced tasks. By default, the cache is disabled, but it may be enabled with the gradle --configuration-cache command or by adding the org.gradle.unsafe.configuration-cache=true property to the gradle.properties file. This release automatically runs tasks in parallel from the first build whenever the configuration cache is enabled. The retention period may now be configured, in order to clean up the caches in the Gradle user home after a specific number of days:

beforeSettings { settings ->
	settings.caches {
    	downloadedResources.removeUnusedEntriesAfterDays = 45
	}
}

The string format of JavaVersion used for sourceCompatibility and targetCompatibility no longer contains the 1. prefix for Java 9 and newer: Gradle 8.0 renders these versions as 1.8, 9, 10 and 11, where Gradle 7.6 rendered the equivalents as 1.8, 1.9, 1.10 and 11.

Release 8.0.1 is the first patch release and the Gradle team recommends using the latest minor version. The GitHub releases page lists all the available versions.

The latest Gradle release may be installed via SDKMAN! with the sdk install gradle 8.0.1 command, via Homebrew with the brew install gradle command, or as a ZIP file via the Gradle Releases page.

Gradle recommends running the gradle help --scan command and viewing the deprecations section of the report before upgrading. After that, the plugins should be updated and, finally, the gradle wrapper --gradle-version 8.0.1 command may be used to update the application to Gradle 8.0.1. The Troubleshooting Builds guide may be used to resolve build issues. More information on upgrading an application from Gradle 7.0 to 8.0 can be found in the user guide.

Gradle now displays an error, instead of a warning, when using the finalizedBy, mustRunAfter or shouldRunAfter methods. Invalid Java toolchain configurations and automatic downloads without providing repositories now result in an error, which may be resolved by following the user manual.

Gradle 8 removed the --add-opens argument that opened the JDK modules java.base/java.util and java.base/java.lang for Gradle workers. This means the workers, by default, can no longer use reflection on JDK internals. Warnings and errors displayed by tools, extensions and plugins can be resolved by updating them.

Configuring the test framework after specifying the test options now produces an error:

test {
    options {
    }
    useJUnitPlatform()
}

This may be resolved by using the JVM Test Suite Plugin, or by configuring the test framework before specifying the test options:

test {
    useJUnitPlatform()
    options {
    }
}

Various APIs, methods and features have been removed; the complete list of changes in this release can be found in the Gradle 8.0 Release Notes.




Is Mongodb Inc (MDB) Stock About to Get Hot Thursday? – InvestorsObserver

MMS Founder
MMS RSS

Posted on nosqlgooglealerts. Visit nosqlgooglealerts


Thursday, March 02, 2023 02:06 PM | InvestorsObserver Analysts



Overall market sentiment has been down on Mongodb Inc (MDB) stock lately. MDB receives a Bearish rating from InvestorsObserver Stock Sentiment Indicator.

Sentiment Score: Bearish

What is Stock Sentiment?

Sentiment uses short-term technical analysis to gauge whether a stock is desired by investors. As a technical indicator, it focuses on recent trends as opposed to the long-term health of the underlying company. Updates for the company, such as an earnings release, can move the stock away from current trends.

Sentiment is how investors, or the market, feels about a stock. There are lots of ways to measure sentiment. At the core, sentiment is pretty easy to understand. If a stock is going up, investors must be bullish, while if it is going down, sentiment is bearish.

InvestorsObserver’s Sentiment Indicator looks at price trends over the past week and also considers changes in volume. Increasing volume can mean a trend is getting stronger, while decreasing volume can mean a trend is nearing a conclusion.

For stocks that have options, our system also considers the balance between calls, which are often bets that the price will go up, and puts, which are frequently bets that the price will fall.

What’s Happening With MDB Stock Today?

Mongodb Inc (MDB) stock has fallen 2.59% while the S&P 500 is up 0.54% as of 2:03 PM on Thursday, Mar 2. MDB has fallen $5.43 from the previous closing price of $209.97 on volume of 1,151,422 shares. Over the past year the S&P 500 is lower by 7.74% while MDB has fallen 46.43%. MDB has lost $5.37 per share over the last 12 months.


More About Mongodb Inc

Founded in 2007, MongoDB is a document-oriented database with nearly 33,000 paying customers and well past 1.5 million free users. MongoDB provides both licenses and subscriptions as a service for its NoSQL database. MongoDB’s database is compatible with all major programming languages and is capable of being deployed for a variety of use cases.





Developing Software to Manage Distributed Energy Systems at Scale

MMS Founder
MMS Ben Linders

Article originally posted on InfoQ. Visit InfoQ

Functional programming techniques can make software more composable, reliable, and testable. For systems at scale, trade-offs in edge vs. cloud computing can impact speed and security.

Héctor Veiga Ortiz and Natalie DellaMaria spoke about Tesla’s virtual power plant at QCon San Francisco 2022 and QCon Plus December 2022.

The Tesla Energy Platform is a microservices cloud architecture using functional programming, as Veiga Ortiz and DellaMaria explained:

Functional programming enables us to move fast while maintaining confidence in quality. Much of our application logic is built with pure functions. This enables us to rely on lightweight unit tests that are quick to run and give us confidence in our code without needing to stand up resource-heavy system or integration tests.

Strongly typed languages, like Scala, allow developers to model business logic with powerful types that can express use cases in a more readable and understandable manner, DellaMaria mentioned.

The immutability of variables reduces possible side-effects and results in fewer bugs and more readable code, as Veiga Ortiz explained:

For example, instead of throwing exceptions, which is an expensive operation because the runtime needs to collect information about the stack trace, we model errors using the type Either[A,B] where the type A represents the Error/Exception and type B represents your successful object returned from a computation.

We also use Option[T] to represent the existence (or not) of an object. When you combine these powerful simple types with category theory and effect libraries such as Cats, you can express complicated business logic in simple for-comprehension blocks, boosting productivity and ensuring your code is doing what you expect at compile time.
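
The article’s examples are Scala; as a purely illustrative analogue, the same shape in Rust uses Result in the role of Either[A,B], Option for optional values, and the ? operator where Scala would use a for-comprehension (the domain types below are made up):

#[derive(Debug)]
enum DeviceError {
    Offline,
    MissingReading,
}

struct Telemetry {
    battery_kwh: Option<f64>, // a reading the device may not report
}

// Return the error as a value instead of throwing an exception.
fn fetch_telemetry(online: bool) -> Result<Telemetry, DeviceError> {
    if online {
        Ok(Telemetry { battery_kwh: Some(13.5) })
    } else {
        Err(DeviceError::Offline)
    }
}

// `?` chains the fallible steps, as a for-comprehension over Either would.
fn available_energy(online: bool) -> Result<f64, DeviceError> {
    let t = fetch_telemetry(online)?;
    t.battery_kwh.ok_or(DeviceError::MissingReading)
}

fn main() {
    assert_eq!(available_energy(true).unwrap(), 13.5);
    assert!(matches!(available_energy(false), Err(DeviceError::Offline)));
}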

DellaMaria mentioned that when making decisions about cloud vs. edge computing, speed and security are often considered. It is often quicker to iterate in the cloud layer before moving logic down to the edge; however, sometimes features make the most sense implemented locally on the device. She added that, as they are vertically integrated, they can release cloud-based features quickly, learn from them, and at any time choose to move that implementation down to the device.

InfoQ interviewed Héctor Veiga Ortiz and Natalie DellaMaria about the Tesla Energy Platform.

InfoQ: What purpose does the Tesla Energy Platform serve?

Héctor Veiga Ortiz and Natalie DellaMaria: The Tesla Energy Platform provides software services that enable real-time control of millions of IoT devices and support a variety of user experiences. Its main purpose is to abstract complexities from the devices, like telemetry collection or device control, into simple and usable primitives through APIs. Having a simple set of primitives opens the door to other applications to create experiences such as Storm Watch or Virtual Power Plants.

InfoQ: How does the architecture of the Tesla Energy Platform look?

Veiga Ortiz and DellaMaria: Applications within the Tesla Energy Platform fall into three logical domains: Asset and Authorization Management to manage device relationships and authorization models, Telemetry for the ingestion and exposure of real time data, and Control to enable smart controls, configuration updates and user features.

All these services run as Kubernetes deployments and expose their APIs through gRPC or HTTP ingresses. Most of our Kubernetes deployments use horizontal pod autoscalers to react to load changes and use the appropriate resources. Horizontal pod autoscalers and Kubernetes cluster node autoscalers help us use the necessary amount of resources at any given time, and therefore keep cost to the minimum required.

InfoQ: How do you trade off between edge and cloud computing?

Veiga Ortiz and DellaMaria: For the past 20 years, edge devices were considered low-powered machines only able to report data from their installed sensors, while server-side computing (either cloud or on-prem) was the sole place where that reported data could be processed. In recent years, newer devices have gained more CPU and memory resources for computation, which is blurring the line of where a computation should happen.

Another important aspect here is cost: if you need to process more data in the cloud or on-prem, you presumably need to allocate more resources for it, increasing the overall cost. Running the computation on the edge, however, makes it virtually free, as you have already paid for those resources. This new paradigm opens up the possibility of an even larger distributed system, where part of the processing happens at the edge.

InfoQ: What do you expect the future will bring for energy cloud systems?

Veiga Ortiz and DellaMaria: Energy cloud systems will continue to grow, increase energy security, and help accelerate the transition to renewable energy by intelligently controlling energy generation and storage. More and more features will live on the devices rather than in the cloud. We do think cloud systems will remain a critical component in supporting user experiences and providing devices with the information they need to make autonomous decisions.



Microsoft Launches New Cognitive Speech Services Features to Accelerate Language Learning

MMS Founder
MMS Steef-Jan Wiggers

Article originally posted on InfoQ. Visit InfoQ

Microsoft recently launched new features for its Cognitive Speech Service to accelerate language learning with pronunciation assessment, new speech-to-text (STT) languages, and prebuilt and custom neural voice enhancements.

Microsoft Azure Cognitive Speech Services is a comprehensive collection of technologies and services such as Speech to Text, Text to Speech, custom neural voice (CNV), Conversation Transcription Service, Speaker Recognition, Speech Translation, the Speech SDK, and the Speech Device Development Kit (DDK) to accelerate the incorporation of speech into applications.

Pronunciation Assessment is a feature of Speech Service in the Azure Cognitive Services portfolio, publicly available in 10+ languages and variants, including American English, British English, Australian English, French, Spanish, and Chinese, with additional languages in preview. It utilizes Azure Neural Text-to-Speech and Transformer models, ordinal regression, and a hierarchical structure to improve the accuracy of word-level assessment, enabling language learners of all backgrounds to improve their skills.


Source: https://techcommunity.microsoft.com/t5/ai-cognitive-services-blog/speech-service-update-hierarchical-transformer-for-pronunciation/ba-p/3740866

In addition, Azure Speech to Text supports real-time language identification for multilingual language-learning scenarios, aiding human-to-human interaction with better understanding and readable context. The service's new speech-to-text (STT) languages are built on vast amounts of data using the latest multilingual modeling and transfer-learning techniques; the output includes Inverse Text Normalization (ITN), capitalization (where appropriate), and automatic punctuation to enhance readability.

Lastly, Microsoft Azure AI provides a range of prebuilt neural voices for AI teachers, content read-aloud capabilities, and more. Custom Neural Voice (CNV) also enables users to create a unique, customized synthetic voice for their applications, using human speech samples as training data. CNV is based on neural text-to-speech technology and is excellent for representing brands and personifying machines in conversational interactions. Education companies such as Duolingo and Pearson are using this technology to personalize language learning.

Qinying Liao, a Principal Program Manager at Microsoft, stated in an Azure Tech community blog post:

Microsoft offers over 400 neural voices covering more than 140 languages and locales. With these Text-to-Speech voices, you can quickly add read-aloud functionality for a more accessible app design or give a voice to chatbots to provide a richer conversational experience to your users.

More broadly, Andy Beatman, a Senior Product Marketing Manager at Azure AI, said in an Azure AI blog post:

The integration of AI, specifically speech services, into the education sector is becoming increasingly important as it can greatly enhance the learning experience and improve the effectiveness of teaching. Speech services such as Azure Pronunciation Assessment and Custom Neural Voice provide personalization, automation, and analytics in education platforms, which can lead to better student engagement and achievement.

Lastly, more Azure Cognitive Speech Services details are available on the documentation landing page. Additionally, customers can use Speech Studio to test how custom speech features would help improve recognition for their audio.



Presentation: Modern Mobile Development: Native vs Cross-Platform

MMS Founder
MMS Sebastiano Poggi

Article originally posted on InfoQ. Visit InfoQ

Transcript

Poggi: We're going to be talking about native and cross-platform app development. I am Sebastiano. I am an Android GDE, Google Developers Expert. I work at JetBrains. Let's talk about the scope of today's talk. The goal of this talk is to help you choose. How we're going to do that is we're going to be looking at the preconditions for your success. We're going to be mentioning some things about mobile that you might or might not know. Then we're going to be trying to understand how we choose between native and cross-platform. If we do choose cross-platform, then we're going to try and understand how we pick the right cross-platform stack for you. As with everything in engineering, there's a big "It Depends" caveat attached to it.

Terminology

I really think it's important to define the terminology for this talk. When I'm talking about a native app versus a cross-platform app, what I mean is this: a native app is something that uses the native build tools for each platform. For example, on Android that would be using Kotlin, Java, C++, and Gradle. On the iOS side, that would mean using Objective-C, Swift, and the Xcode build tools. On the cross-platform side, the difference is that we're talking about non-native build tools, although there might be some partial overlap, and cross-platform may use web technology, at least partially. The main point is that when we're talking about cross-platform, we're talking about something that works the same and uses the same technology on both Android and iOS. In neither case are we talking about something that runs in a browser: that's called a website, which is not to say it's worse, it's a different thing.

Company Dynamics

Let's start by talking about company dynamics. We are all here because you want a mobile app. There are two possible scenarios for wanting one. The first is a greenfield project, something entirely new. The other is rewriting an existing application code base, either in one go, which is more similar to greenfield, or progressively rewriting something that for whatever reason doesn't really work for you anymore, maybe because of tech debt. In the latter case, you will have a pre-existing team that wrote the previous incarnation of your app. Or maybe you hired someone externally to do the work for you, in which case it's a greenfield for you because nobody in-house knows anything about the code base. The big rewrite generally happens because there's been some degree of failure in the management of the project: the application didn't succeed, or it doesn't do what it needs to do.

The existence of a team that works on mobile is very important. In mobile, you might not think about it, but there is no such thing as a full-stack engineer. Mobile engineers tend to focus on their platform of choice, and they don't always like working on backend. Again, it all depends; there are cases when they do. Generally speaking, in my personal anecdotal experience, backend people also don't want to do, or care about, mobile. There are information silos and knowledge silos on both sides. The mobile team and the backend team don't always work together. They might be forced to work together, but they don't really work together. This can cause misalignment of goals and backlogs, as well as misunderstandings and a lot of other things you should really be looking into when facing a new app or an existing app rewrite.

Then there's something else that's quite important. There's a big common case in mobile, which is that there are two different chains of command and reporting for mobile and for tech. This is especially true in the big rewrite scenario. Mobile reports to product and is seen as part of product, whereas backend, web, and the architects report to the CTO. What this means is that there is a concrete risk that mobile becomes nobody's child, in a sense, because management doesn't get it. They may be excellent product people, but they don't understand technology in the same way a CTO would. On the other hand, because the technology that is not mobile is under a different team, a different lead, a different VP, a different C-level executive, the technology stack in the company is not built around mobile, not built for mobile. That can be problematic, because the needs of mobile and web can be different. This leads us to also discussing the fact that, in almost all cases, especially for companies that have existed for a while, web has always been ahead. It tends to be ahead because mobile generally comes later. I'm not saying it's necessarily an afterthought, but web is the easiest and most straightforward way to get something out there. I know this makes it sound like I'm trivializing web frontend work. I'm not. What I mean is that mobile is very complicated and has a lot of scope. Mobile can include wearables, TV apps, car applications. There's a lot that goes into mobile beyond just the phone. Compared to that, web is relatively straightforward. This complexity and difficulty of dealing with mobile can sometimes exacerbate existing issues in your company, so you want to be ready for that.

When Things Go Wrong

What happens when things go wrong? Obviously, nobody wants things to go wrong, because nobody likes failing. Failing on a project can make your bosses fairly unhappy and frustrated. That's a lousy feeling. Nobody really wants their boss to be angry at them. Also, especially in some organizations, this can start blaming games where the mobile team says it's the backend team's fault, the backend team says it's the mobile team's fault, and maybe someone else says, "It's the design team. They did a really bad job." That's not really useful to anyone, except for those trying to look not guilty, though they probably know deep down that it's also their fault, because when there's a failure, it's everyone's fault. It's nobody's in particular, but everyone's, and it needs to be treated as such. Unfortunately, what I have seen happen several times in my career in a mobile agency is that some managers see a change of tech stack as a way to shift future responsibility away from themselves. It's like saying, the engineers told me that technology XYZ is going to fix a lot of problems, so we're going to go for it. Then when that inevitably doesn't help, it's the devs' fault: "They lied to me." That isn't particularly helpful in the long term either, because you fail anyway. These are wrong choices made for the wrong reasons. It's very important to be honest with yourself and with your team, talking with the rest of the company and understanding why things went wrong, if they did go wrong in the first place.

Bad apps exist, so obviously it isn't all fine and dandy; things still happen that we wish didn't, but they do anyway. The reason most bad apps exist, in my opinion, is that there were some bad choices behind them. In particular, someone may have forced a choice or made some bad assumptions. A very common bad assumption I've seen made is that the company's existing technology stack will work for mobile just as well as it works for web. That's not always the case. You might have a very nice setup for your web backend, but it doesn't necessarily work well for mobile, because the way people use mobile is different. A website and a mobile app have the same goal, but they are fundamentally different in their scope and in the way they are made. You will probably need to reach outside of your comfort zone and find the best solution for your specific needs. This is not always easy. It can be very bad if you do not have management buy-in. Make sure that when you do make a choice, you have higher-ups' buy-in and they will not leave you stranded with whatever you choose.

Lastly, and I want to close this section with a very important consideration, users do not care about the technology stack you use. They couldn't care less, unless they are nerds like me that are very interested in this thing. The vast majority of people don't know and don't care about the choices you make in terms of tech; they just really want to get stuff done. They have an idea. They want to do something, and they want to do it with you. The technology itself doesn't help them; it can only get in their way. Your goal as a company should be to help them, so that you can transitively help yourself and your business.

(Re)starting

We've talked about all this. We've made this introduction. Let's say, yes, it is time to start or restart work on a mobile application. Before you do, there are really a lot of things you need to pay attention to; the most important one to me is that you need to ask yourself the tough questions. The tough questions when it comes to a mobile app are things like: do users want or need a mobile app? If they don't, you're wasting time and money. There's no point in making an app nobody's going to use. If you're unsure, think: can you help your users get to what they want, do what they want, with a high-quality website that is responsive and works well on mobile? If the answer is yes, then you probably don't need a native app or a cross-platform app. Then, does your competition have an app? Looking around is always good. You are not in the market alone; you're there with competitors. Do they have an app? Do their users use their app? Is it a good app? How good is it? If you're trying to enter a market where everyone has an app, or some of your competitors have an app but that app is mediocre, then you have a relatively low bar to clear for your app to make sense. If you're entering a market where your competitors have a high-quality app, people are more likely to keep using your competitor's app if yours is not as good as theirs. This is very important.

Once you have asked the tough questions, then you really need to make some decisions. The best way to do that is to use data to drive those decisions, because gut feeling can mislead you: it's mostly shaped by your local market, and as tech people we live in a bubble of sorts, so data you can trust is very important. You should run focus groups, user studies, market research, competitive analysis. All those things are very useful because they give you insight into what's out there, what people think, what they perceive. Once you have the data, you need to trust it. This also includes treating the data with respect: respecting what the data says, not twisting it to say what you wish it said. You need to trust the data even when it says something you don't like. Because if the data says nobody cares about a mobile app for your product, then don't build one. It's fine. Maybe do the website. Maybe don't do that either. It really depends on your company, your users, your products.

When you realize that you need to do what your data says, sure, go for it. An app it is. Then you need to understand: who is in charge of the application? Is it the CTO? Is it the CPO? Is it a tech thing? Is it a product thing? My personal recommendation, based on experience, is that you should really try to align the mobile team with the rest of tech. That probably means the mobile app should be under the CTO, not under the CPO, because you want the various tech teams to work as closely together as you can. Not just having turf wars where the product people blame the tech people and the tech people blame the product people, and then they don't understand each other and they have different needs and different backlogs. Avoid that if you can. Always think about your users. You want to help your users do what they want, so they help your business by using your services or making a purchase from you. What are your users trying to do? That is the fundamental question when you're doing anything, but especially when you're thinking about mobile. The interaction, the usage your users will make of your app, might be fundamentally different from what you think, from what you're used to from your website. It's very important to understand this, because it, in turn, allows you to define what is in the scope of the application and, most importantly, what is not. Throwing in everything and the kitchen sink is not going to help. Mobile is complicated. Mobile is expensive. You need to focus your efforts to make sure that it works the way you intend it to work.

It's very important to talk about capability versus capacity. There are two different ways to look at this. Capability is: is your existing team able to work on a mobile app? Capacity is: I have five people, I'm going to throw them at this mobile thing. What can your existing teams do, if you have any? Do you have any mobile devs that know the native platforms? If you decide to go with cross-platform, are they ok with that? Are they going to do it? Remember, the current job market for mobile engineers is very active and alluring. It's very easy for someone to just leave and leave you stranded. It's very important that you consider these things. Also, if you do not have a pre-existing mobile team that knows the platforms, you really need to ensure that you have at least a couple of people, one per platform, that know the platforms very well. You are going to need it at some point. You might not need it on day one, you might not need it on day 10, but you will need it. Because at some point, even if you do cross-platform, you will end up interacting with the underlying platform. If your people have no clue how that works, you're going to be stuck at some point. Please remember, you always need at least one person that knows Android and one person that knows iOS. Or, if you find a rare developer that knows them both fairly well, that is also fine; then you have a bottleneck of just one person that can take care of everything related to platform-specific stuff.

Why do you need this? Because, as I said, even if you're writing JavaScript for React Native, your application will eventually need to interact with the operating system. Most mobile apps require this. If you're going to write an application that does not talk to the operating system, then maybe it shouldn't be an app; maybe it should just be a website. It's going to be faster, it's going to be cheaper, and you probably already have the knowledge in your company to do it. So-called website apps, essentially apps that have a listing on a native app store such as the Apple App Store or Google Play Store but are just a WebView, are useless. Nobody wants them. If your application brings something to the table, then great, but if it doesn't, there's no point in making one. Let's consider an example. You have an existing ReactJS development team, and you want them to do mobile. This is potentially problematic, because the technology and the tools they will be using are somewhat different. Sure, there is a vast amount of concepts shared between ReactJS and mobile. But as soon as you need to do something custom, you need someone that knows how to do that custom something. Native knowledge is, again, required to do some things. Even if you have a third-party vendor with a native SDK that you want to integrate with, you will need someone that knows how to do that: someone that knows how to write Kotlin, someone that knows how to write Swift, that can write that integration for you if one isn't available, or if the third-party ones you'd generally have to use are not very good, which can happen.

Once you have considered your team's capabilities, you want to involve them in the process. You want your devs to be involved in the choice because, again, they're the ones that are going to have to work with it, so it makes sense. Listen to what their fears are. It's not uncommon, if you have pre-existing native teams, for people to feel threatened by the adoption or potential adoption of cross-platform frameworks. Because if everything goes well, then you don't need two full teams; you need two people, one per platform, plus one team, maybe slightly larger than either of the two single ones. That means people are likely going to lose their jobs if you move from native to cross-platform. They know that. Listen to them, and reassure them. Make it so that they feel safe in your company. Because otherwise, again, they will leave and you will be left without a team that can work on your product. That's not good. When talking with your team, it's important, as always, to remember not to chase fads. Developers like ourselves, we like shiny things, we all know that. Your team will be no exception. They will have their favorite technology, and their favorite technology might not be the best one for you. You need to talk to them and make sure that if they suggest adopting one thing or the other, it's not because they are fanboying Google, for example. It's very important that you choose a technology on its merits and not because of someone's personal preferences. Spikes are a good way to test the waters: time-box a couple of people to build a demo with a specific framework. Remember that spikes can be very deceiving, because people tend to choose things they like and things that are easy. Try to orient your spikes towards a very vertical slice of things, not just developing the UI, because doing just the UI is definitely easier than doing things like storage, caching, networking, databases, all that stuff.

Once you commit, remember, you're in for the long run. It is a big investment you're making, regardless of whether you're going native or cross-platform. There will be huge switching costs if things go wrong. There is in fact a fairly high amount of lock-in, both in terms of the technology you use and the skills of the people you will need to work on those things. There is a very high chance that if you make the wrong choice and need to change tech, you will have to rewrite everything.

Native or Cross-Platform?

With all the caveats out of the way, it is now time to talk native or cross-platform. Native is always better, in some sense. It has better performance than any cross-platform framework. It will have better integration with the operating system and with third-party libraries, and better support from the community at large, because those communities are always bigger. Native is more consistent with the operating system, because it's native. That's the whole point. You will have access to more APIs and features, like wearables, like tvOS. If those are things you care about, then native might be the right way for you to go. The tooling on native is constantly improving; even Xcode is getting better. Now they even have rename for variables in Xcode, mind-blowing. Not everything is great, obviously, otherwise there wouldn't be cross-platform frameworks.

Native is more expensive. You need a dedicated team per operating system. There are infrastructure and process implications you need to consider. For example, you will most likely have to have different CI setups for Android and for iOS. This might also happen in the case of cross-platform, because the tooling between the two platforms is different. Beyond that, deploying and publishing your application to the stores, the Apple App Store, or the Google Play Store, whatever other stores on Android you want to publish to, is different. They work in different ways. There are different timings and expectations. Consider that.

In some cases, going cross-platform might be the most pragmatic choice for your specific case. Cross-platform might be enough. By that, I don't mean that cross-platform is worse than native. I mean that what cross-platform is better at might be all that you care about. In that case, it makes sense, because cross-platform has vastly improved over the years. It does have some advantages for developers. For example, Flutter and React Native offer hot reload, which is great, because you can test your changes like you do on the web, without having to redeploy the application to the device or virtual device. If you do go with cross-platform, my personal recommendation is to make sure that you have a very strong design language that does not look like Android or iOS, because if it looks too much like either platform, it will look out of place on the other one. At the same time, it's very hard to mimic the native look and feel of a platform without using native controls. You can get maybe 80% of the way there easily; the remaining 20% is very hard. There's a very big uncanny valley there. Try to do something like Duolingo does, where they have a very unique design language. Duolingo is a native app, but if you do cross-platform, that's a good example of how you could do it.

A Fictional App Case Study (Wearables Company)

Let's look at a case study. There's a wearables company. They're wondering: do we need an application? They have the need for a companion app, because their point is: we have a smartwatch, like the one in the picture. We want to be able to communicate with it, to download data, to show historical trends. Yes, you need an app; you cannot do this from a website. Maybe you could, but you really probably shouldn't. The next question they should ask themselves is: do they need a native application? Are they using the operating system APIs heavily? Because this is a big point in favor of native code. The answer is yes, they're going to be using them a lot, because they're going to have to do a lot of work in the background, plus all the pairing stuff with the device. If they're only using Bluetooth, there might actually be a case where cross-platform, especially Flutter, can do most of the stuff. If you go native, though, you have more control over what you do. You have more possibilities. Then the next question, again extremely important: can users achieve their goals with a native app? With a native app, users will always be able to achieve their goals, because native is the gold standard. For as much as cross-platform frameworks are good, native remains the gold standard, because that's where features originate from. That's also a yes. This means that, yes, they do need a native app.

The Main Choices: React Native, Xamarin, and Flutter

I've been talking about native and cross-platform, but when I say cross-platform, what do I mean? The main three choices I see today for someone doing cross-platform are React Native, Xamarin, and Flutter. There are also web-based frameworks; examples of those would be Cordova, Ionic, PhoneGap, and others. I would be remiss if I didn't mention that Kotlin Multiplatform also exists; as you might know, Kotlin is a language designed by JetBrains and worked on with other companies as well. In terms of the web-based cross-platform frameworks, I would probably avoid them if at all possible, because they tend to be fairly old and not very good at all. As for Kotlin Multiplatform, it's very different from React Native, Xamarin, and Flutter, because right now Kotlin Multiplatform is focused on sharing business logic more than UI. For the scope of today's talk, let's ignore them and focus on React Native, Xamarin, and Flutter.

React Native is a technology created by Facebook. It's based on ReactJS, also made by Facebook. The good thing about this is that if you have a strong ReactJS presence in your company, you can probably share some of the skills and some of the code with the web team, because just like ReactJS, React Native is built on JavaScript and npm. One important thing to mention about React Native is that there is third-party support for desktop, wearables, and TV, in varying degrees of experimental maturity. If your focus is those things, I would not use React Native just for that, because it's all very immature right now, or at least that's the feeling I get. React Native is something mature enough and powerful enough that you can make B2C apps with it. There are, in fact, a lot of big React Native apps out there. Apart from parts of the apps from Facebook and Instagram, there are also quite a few third parties that use React Native. There are some limitations where performance is concerned, but they have been improving on those in the past few years. Especially when it comes to animation: if you're going to do something very animation heavy, I wouldn't use React Native for that. Also remember, if you need custom UI components, you need to create a per-platform implementation, which is tedious. At the same time, that's how you get a very native-feeling UI at the end of the day from React Native: because it's using native stuff. There are famous cases of apps having abandoned React Native; most of all, I think everyone knows about Airbnb a few years back. It doesn't mean it needs to be your case, but it's a data point to consider.

Xamarin instead is a very different beast. It's created by Microsoft. It used to be a paid product; now it's free, and it is open source. If your company is very big on C#, maybe you have ASP.NET, then this will give you the ability to have a full-stack .NET implementation from the backend to all the frontends. Xamarin has a fairly unique UI approach in the sense that you can either go multi-platform with Xamarin.Forms, or you can use native views somewhat like React Native does. The latter, I don't think is used that much. I cannot really find much information about it, which probably tells you something already. Xamarin, contrary to React Native and Flutter, tends to be very close to the source operating system APIs. You need to know the operating systems anyway, because what Xamarin does is automatically wrap the existing APIs from the native SDKs and expose them in C#. There is somewhat limited support in terms of third parties; not that many have native Xamarin libraries. You can generate bindings, but your mileage may vary. Tooling is somewhat limited. In my personal opinion, Xamarin is probably best for internal and somewhat unsophisticated apps. It is very enterprise oriented. It's probably not that great for B2C apps; you probably don't want to use Xamarin for that.

Lastly, Flutter. It's from Google, and it is very quickly rising in popularity. There is a lot of investment and marketing that Google is putting behind Flutter, and that may be why it's rising so quickly. There are a lot of very good first-party and second-party integrations. For example, as you would expect, things that are from Google, like Firebase, are very well integrated with Flutter. Flutter uses Dart as a language and the Pub ecosystem for dependencies. You can technically do almost everything with Flutter. You can do mobile. You can do desktop; they are specifically targeting macOS, Windows, and Linux. You can do web. You can also do embedded IoT. Essentially, anywhere you can run an Android APK, you will be able to run a Flutter app. There is no support for watchOS and tvOS as far as I can tell right now, unless you really want it and find someone that will do that for you. If you already have a lot of people writing Dart on the backend in your company, for whatever reason, then you can also do UI with Dart. That's Flutter. Flutter has best-in-class testing capabilities. Important to note: the developer audience of Flutter is heavily skewed towards Android, probably because it is a Google thing.

To give you a sense of scale, these are the numbers from Appfigures on how many apps on the Google Play Store and on the Apple App Store use the different cross-platform frameworks. You can see that, surprisingly to me, Cordova is at 17% on iOS and 20% on Android. React Native is second, roughly 5% lower in usage. Flutter comes after that, and you can see its very strong skew towards Android. Ionic and Xamarin are much lower still. Then there are other things, like the recently obsoleted and abandoned Titanium/Appcelerator framework. Those numbers are deceiving, though, because they don't tell you that if you ask how many mobile developers are using cross-platform, it is about a third of them. Consider that when you're building a cross-platform team: the hiring pool you can tap is roughly two-thirds smaller. Also, if you look at historical data, the numbers we saw change. The share of people saying they use the web-based frameworks, and to some extent React Native and Xamarin, has gone down over the last three years. React Native is not down that much, but others are very much going down. On the other hand, you have Flutter going up. Flutter looks to be doing particularly well, with 42% of mobile apps having used it. Kotlin Multiplatform is going up too.

Fictional Fintech Case Study

Here's another fictional case study to wrap things up. You have an investment company, a fintech. First consideration: do they need an app? The answer is yes. Their competitors all have an app, so they do need an app. Do they need a native app, though? First, let's look at the same criteria we looked at earlier. Are they using the OS APIs heavily? Not really. They will mostly be using push notifications, but that's somewhat of a solved problem. So no, they're not. Can their users achieve their goals? Assuming their goals are to look at their stocks, maybe buy some more, track them, and get notified, then yes; they don't need a native app to achieve those goals. You can do that with any cross-platform framework. They probably don't need a native app. We are going cross-platform; which framework? First of all, the company in question does not have a ReactJS team, for whatever reason; maybe they use Angular. They do not have a strong .NET team in-house either. That rules out Xamarin, because there's no point in using Xamarin if you don't already have .NET in your company. They are using Firebase services quite heavily. Lastly, they are planning to do a lot of custom UI, because all those graphs and nice animations are a very B2C thing. Given these elements, probably the best solution is to use Flutter.

Testing on Mobile

One last thing: testing on mobile. Remember, unit testing is a solved problem on mobile; not that big of a deal. UI testing is where the problems are. Flutter is the framework with the best solutions when it comes to UI testing, because you can run so-called widget tests, which are offline UI tests that are very useful, especially on CI. Native and React Native will end up using per-platform tools. Xamarin as well, though I seem to remember its tools are custom, but still per-platform. When you want to run UI tests on CI, you will be running instrumented tests that are slow. They need a device, physical or emulated/simulated. There are cloud services with device farms that you can run your UI tests on, but they're really expensive, and they are complex to set up and maintain. The classic workaround is to do more unit testing; especially on Android, you have things like Robolectric that allow you to run what would otherwise be integration tests as if they were unit tests. There's also a market for specialized CI solutions for mobile projects, like Bitrise, and things like that.

Review Decisions

We have made a choice. We are working on our app. It's time to keep an eye on what we have done and what we have achieved, because bad choices will not be clear right away. For example, we've seen that some apps have abandoned React Native after a few years. Flutter might turn out to be the same down the line; we don't know. It's important that we remember to keep an eye on the progress of things. Because, yes, failure will be expensive, but sunk cost bias will be worse. Stopping early will help you contain the damage.

Takeaways & Resources

First of all, understand if mobile can work for you. Use data. Second of all, make sure your company can do mobile; sort out the organization, the team responsibilities, and whatnot. Then, when it's time to make a choice, assess the compromises, understanding that there is no silver bullet and every company is different. Lastly, make the right choice and go. You can find some more information and links at go.sebastiano.dev/qcon-2022.

Questions and Answers

Mezzalira: You discussed the different valid types of implementation very well. In your experience, which one would you choose 99% of the time?

Poggi: Personally, I do native development. That's the most obvious choice for me. If I found myself in the situation of having to choose a cross-platform framework, I would probably use Flutter, mostly because it's the one I have some experience with. Dart is not a language I'm particularly skilled in, but it's close enough to Java that it's easy to understand. As a second choice, I would probably choose React Native, but with the caveat that I barely know any JavaScript, so that might be a bit complicated; the knowledge gap would be bigger. If you understand how declarative, reactive UI frameworks work, using one or the other is mostly a matter of getting used to the specific semantics and language of one framework versus the other.

Mezzalira: I tend to agree, but obviously then there is a favorite language that kicks in.



Migrating From Enzyme to React Testing Library – Sentry Case Study

MMS Founder
MMS Bruno Couriol

Article originally posted on InfoQ. Visit InfoQ

The Sentry engineering team recently recounted on its blog the drivers and lessons learned from migrating its front-end tests from Enzyme to the React Testing Library. The migration was triggered by Enzyme’s lack of support for newer versions of React. The migration took about 20 months and involved 17 engineers reviewing around 5,000 tests.

Sentry's engineering team had previously decided several times against migrating its test base to React Testing Library (RTL), for lack of substantial-enough benefits. The team recalled:

We don’t just throw things in because they’re new. We carefully evaluate new technologies to understand what benefits they bring to our team. RTL was known to us back then, but we didn’t have strong arguments about why we should bring it into our codebase. Enzyme, the library we used to test our component library, was still attending to our needs.

On the one hand, Sentry was already engaged in a large migration to TypeScript which, together with regular product work, was keeping the engineering team busy.

On the other hand, Enzyme tests often took very long and the team had a strong interest in improving test speed.

[Chart] Enzyme tests performance (source: Sentry's engineering blog)

A proof of concept showed a 12% performance improvement, which was deemed insufficient to embark on yet another long migration project. The proof of concept nonetheless proved that RTL had observable advantages over Enzyme. As the team reports, Enzyme did not test for accessibility; did not automatically clean up the test environment; and often directly accessed the component under test’s state. RTL, on the other hand, is closer to integration testing and strives to test application use cases from the user’s perspective. In particular, RTL strives to avoid testing implementation details. Implementation changes should only break a test if they indeed introduced a bug.

The tradeoff analysis changed after Sentry completed its migration to TypeScript and started to upgrade to React 17 (which includes React Hooks). The team recalls:

The [RTL] migration still didn’t get much attention until we worked on updating React to version 17. The React core team had completely rewritten the library’s internals and Enzyme directly used a number of internal React functionality.
[…] Enzyme didn’t work 100% with this new version of React, but there was an adapter on the market that worked around this problem and that’s what we used. However, this solution wouldn’t work in the long run as React 18 would require a complete rewrite, which was unlikely to happen given that Airbnb had dropped support for Enzyme.
[…] RTL does not rely on React's internals and would continue to work the same with React 18 as it did with 16 and 17.

Once the green light was given, the focus switched to minimizing the risks of the migration project (engineering estimates under various hypotheses, iterative approach vs. big-bang migration, progress tracking, RTL training, surfacing of best practices, daily code reviews, and more).

The migration was completed after 18 months (vs. an estimated 14 months). It allowed the team to remove obsolete tests, improve accessibility (a previously overlooked aspect), and write tests based on use cases instead of implementation details.

The team detailed unexpected performance issues encountered when following some RTL recommendations to the letter (e.g., in addition to mocking web APIs, mock the user as much and as realistically as possible). In spite of not seeing dramatic improvements on the test-performance front (the primary pain point that drove interest in the initial proof of concept), the team concluded positively:

Although the performance of our tests has not improved as we had hoped, the introduction of the library has brought many other benefits, such as tests that no longer rely on implementation details, but instead test what the user sees and interacts with. And ultimately, that’s what matters.

In the original article, the Sentry team discusses at length interesting technical details, accompanied by quantitative data and qualitative illustrations that may be of interest to other engineering teams. Interested developers are invited to refer to it for further details.

React Testing Library is one of a suite of user-interface testing libraries (e.g. DOM Testing Library, Vue Testing Library, Svelte Testing Library, Puppeteer Testing Library) with a similar driving philosophy:

You want your test base to be maintainable in the long run so refactors of your components (changes to implementation but not functionality) don’t break your tests and slow you and your team down. The DOM Testing Library [provides utilities to] query the DOM for nodes in a way that’s similar to how the user finds elements on the page […] The more your tests resemble the way your software is used, the more confidence they can give you.



DIC Recruitment 2023: 22 Vacancies, Check Posts, Qualification and Other Details

MMS Founder
MMS RSS

Posted on nosqlgooglealerts. Visit nosqlgooglealerts


DIC Recruitment 2023: Digital India Corporation (DIC), under the Ministry of Electronics & Information Technology, Government of India, is inviting applications from qualified candidates to fill 22 vacancies for the posts of Business Analyst, Developer/Sr. Developer, Technical Support Executive, and On-Boarding (Manager/Senior Manager). According to the official notification of DIC recruitment 2023, applications will be assessed based on qualifications, age, academic record, and relevant experience. Shortlisted candidates will be called for an interview.

As given in the official notification of DIC recruitment 2023, interested and eligible candidates can apply online through the official website of DIC. For more details about DIC recruitment 2023, candidates can download the notification from the official website.

Post Name and No. of Vacancy for DIC Recruitment 2023:

In accordance with the official notification of DIC recruitment 2023, applications are invited for the posts of Business Analyst, Developer/Sr. Developer, Technical Support Executive, and On-Boarding- (Manager/ Senior Manager). There are a total of 22 vacancies for the given posts.


Qualification for DIC Recruitment 2023:

According to the official notification of DIC recruitment 2023, the qualification and experience required for every post is given below-

For Business Analyst-

Graduation/B.E/B.Tech or equivalent.
Qualification can be relaxed in the case of exceptional candidates.

Experience-

  • 2+ years’ experience in Software/Networking projects.
  • Management/Operations, Enterprise-Wide systems integration/implementation projects.
  • MS Office, MS Project, JIRA, etc.
  • Good Communication skills (Oral and Written).

For Developer/Sr. Developer-

B.E/B.TECH/MCA or any Equivalent Degree with excellent analytical and software development skills.

Experience-
0-7 years of proven software development experience in IT

  • Excellent in PHP, Python scripting, and NoSQL databases.
  • Hands-on working experience with the AWS stack (S3, EC2, ECS, etc.).
  • Database programming using any flavor of NoSQL and SQL databases.
  • Exposure across all the SDLC processes, including testing and deployment.
  • Characteristics of a forward thinker and self-starter.
  • Ability to work across multiple projects.
  • Passion for educating, training, designing, and building end-to-end systems that drive a diverse and challenging set of customers to success.
  • Good to have knowledge of Spark, Airflow, ELK, Kubernetes, and Kafka.

For Technical Support Executive-

Graduate in Law/ English/ Journalism/ Economics/ Commerce/ any stream from a recognized University.

Experience-

  • 0-3 years of experience, with the zeal to handle customer queries and resolve them in a time-bound manner.
  • Familiarity with MS Office, reporting, and documentation.

For On-Boarding (Manager/ Senior Manager)-

Any Graduate with relevant years of experience.

Experience-

  • Excellent communication skills both verbal and written.
  • 2-10 years of experience (preferably with corporate exposure and dealing with senior management).
  • Liaising with Media, Corporate Houses, and Government departments.
  • Sound technical knowledge of web and mobile software concepts.
  • Proficiency in drafting presentations, business papers, white papers, technical documents, and manuals.
  • Ability to multitask and manage multiple priorities and commitments concurrently.
  • Commitment to the organization’s goals and values.

How to Apply for DIC Recruitment 2023:

As per the official DIC recruitment 2023 notification, interested and eligible candidates can apply online through the official website of DIC. For more details about DIC recruitment 2023, candidates can visit the official website.




Article: Writing Cloud Native Network Functions (CNFs): One Concern Per Container

MMS Founder
MMS W. Watson

Article originally posted on InfoQ. Visit InfoQ

Key Takeaways

  • The Docker and Kubernetes documentation both promote the concept of packaging one application or “one concern” per container. This can also be a guideline for running “one process type” per application and container.
  • Telecommunication-based Cloud Native Network Functions (CNFs) have specific requirements, such as low latency, high throughput, and resilience, which incentivize a multi-concern/multi-process-type approach to containerization.
  • There are seven benefits to having “one concern, one process type” applications packaged within a container, and they are lost when tightly coupling process types.
  • Having multiple process types exacerbates the cloud native “zombie” process and “signal (SIGTERM)” problems.
  • High-performance telco applications that are implemented with multiple process types should explore using Unix domain sockets for communication instead of TCP or HTTP, as this can speed up communication between containers (see the sketch after this list).
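
As a rough sketch of that last point (an illustration of the general technique, not code from the article), here is a Unix-domain-socket exchange in Scala using the JDK's socket channels (Java 16+); the socket path and payload are hypothetical:

    import java.net.{StandardProtocolFamily, UnixDomainSocketAddress}
    import java.nio.ByteBuffer
    import java.nio.channels.{ServerSocketChannel, SocketChannel}
    import java.nio.file.{Files, Path}

    // Two containers in the same pod could share this path via a common
    // volume, bypassing the TCP stack entirely.
    val socketPath = Path.of("/shared/telemetry.sock")
    Files.deleteIfExists(socketPath)

    // Server side: bind a Unix domain server socket and accept one connection.
    val server = ServerSocketChannel.open(StandardProtocolFamily.UNIX)
    server.bind(UnixDomainSocketAddress.of(socketPath))

    new Thread(() => {
      val peer = server.accept()
      val buf  = ByteBuffer.allocate(256)
      peer.read(buf)
      println(s"server received: ${new String(buf.array, 0, buf.position)}")
      peer.close()
    }).start()

    // Client side: connect and write a message over the same socket file.
    val client = SocketChannel.open(StandardProtocolFamily.UNIX)
    client.connect(UnixDomainSocketAddress.of(socketPath))
    client.write(ByteBuffer.wrap("sample-metric=42".getBytes))
    client.close()

In a Kubernetes pod, both containers would mount the same emptyDir volume at the socket's parent directory so the socket file is visible to each.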

There is value in defining both thick and thin definitions of microservices. Thick microservices would be any code that harnesses Conway's law and is deployed along the boundaries of a product team. Thin microservices would be those services that adhere to a coarse-grained deployment of code, typically in a container, with only one concern.

Cloud native network functions (CNFs) are network applications in the telecommunications space that have different non-functional requirements from most cloud native enterprise applications. CNFs are often stateful while requiring low latency, high throughput, and resilience. Any architecture that reduces or prohibits any of these requirements is either a bad fit for telecommunications development or requires a special exception in implementation. This is where the challenge arises for the thin microservice model, which promotes a “one concern, one process” design for containers and CNFs.

One Concern for Containers

The Google Cloud documentation, Docker documentation, and Kubernetes documentation all promote the concept of one application or one concern per container. While the Google Cloud documentation uses the term "application," the Docker documentation uses the term "concern" and further describes a concern as a set of parent/child processes that are one aspect of your application. A good example of this would be an implementation of nginx, which creates a set of child worker processes on startup. Another way to understand the one-concern rule is to say that only one process type (such as the set of nginx worker processes) should exist within a container.

Why does this rule exist? While at first thought the rationale behind this rule would be to reduce complexity within a single module, component, object, etc., the real driver behind this rule is adherence to the code’s rate of change[1], a concept borrowed from traditional architecture and biology. An artifact should be deployed at a rate consistent with how often it changes. The cloud native way is to give maximum effort to decoupling code to make this the case. The pushback against decoupling is often driven by the need for performance optimizations, which we will return to later.
 
The telecommunications industry and other industries have a history of development in isolation. In other words, within the telecommunications industry, code, libraries for code, and code deployment have been developed within one large organization. Even when multiple sub-organizations were jointly involved in developing a large project, such as a commercial-grade switch, the deployment of such libraries, projects, and end products was funneled together and deployed in a lock-step fashion. Given this history, which is problematic for even the thick definition of microservices referred to earlier, it shouldn't be surprising that network functions have even more difficulty adhering to the thin definition of microservices and the one-concern rule.

The Seven Benefits of One Concern, One Process Type

Tom Donohue illustrates the benefits of the one-concern principle well, rephrased here:

  • Isolation: When processes use the container namespace system, they are protected from interfering with one another.
  • Scalability: Scaling one process or process type is easier to reason about than scaling multiple types. This could be for complexity reasons (one process type is harder to scale than multiple process types) or because the rate of change is different (one process needs to be increased based on different conditions than other processes).
  • Testability: When a process is assumed to be running by itself, it can be tested in isolation from other processes. This allows developers to locate the root cause of problems easier by eliminating extra variables.
  • Deployability: When a process's binary and dependencies are deployed in a container, the deployment is coarse grained relative to the rate of change of the binary and the container, but fine grained relative to the rate of change of other processes and their dependencies. This makes deployments adjustable to where and when a change happens in your dependency tree instead of redeploying everything in lockstep.
  • Composability: One concern, and therefore process type, per container is much easier to reason about because it is easier to share and verbally communicate about its contents digitally. This makes it easier to reuse in other projects.
  • Telemetry: It is easier to reason about the log messages that come from one concern or process type than log messages that are interleaved with other concerns. This is even more true in a container that prints all log messages to standard out, such as in a 12-factor cloud native app.
  • Orchestration: If you have more than one process type in a container, you will have to manage the lifecycle of the secondary concerns within the container, which effectively means creating your own orchestrator within the parent process type.

The impact of the open-source cloud native movement on the telecommunications industry is an explosion of collaboration between vendors. Instead of developing tightly coupled software under the umbrella of one organization, the call for more collaboration and interoperability has driven multiple projects from different organizations to revisit the benefits of the one concern principle.

Cloud Native Best Practices for Processes

Process order independence, a devil's bargain

One of the arguments for putting multiple process types in the same container is the desire for more control over the startup order of concerns. An example would be a traditional application that needs a database: the application and web server may fail to start properly if the database isn’t available first, so someone may script the database to start before the application in a Dockerfile entrypoint. While this does work, it forfeits the seven benefits of keeping your concerns loosely coupled. A better approach is to make your concerns and process types order independent wherever possible. A sketch of the retry pattern that enables this follows.
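As a rough illustration of order independence in Go: instead of requiring the database container to start first, the application retries its connection until the dependency is ready. The host name, credentials, and Postgres driver below are assumptions for illustration, not a prescribed setup.

```go
package main

import (
	"database/sql"
	"log"
	"time"

	_ "github.com/lib/pq" // Postgres driver chosen for illustration; any driver works
)

func main() {
	// Retry the connection instead of depending on startup order.
	var db *sql.DB
	var err error
	for {
		db, err = sql.Open("postgres", "postgres://app@db:5432/app?sslmode=disable")
		if err == nil {
			err = db.Ping()
		}
		if err == nil {
			break
		}
		log.Printf("database not ready, retrying in 2s: %v", err)
		time.Sleep(2 * time.Second)
	}
	log.Println("database available, starting application")
	// ... use db and start serving traffic here ...
}
```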

Your process will be terminated

Kubernetes has pod priority settings that allow it to preempt or terminate pods when a set of conditions is not met. This means pods need to respond to the graceful shutdown requests of these scheduling policies or risk data corruption and other errors. These requests come in the form of a SIGTERM signal, which by default gives the process 30 seconds before a SIGKILL forcefully terminates it. When running multiple processes, every child process needs to be able to handle graceful shutdown signals. As we will see later, handling graceful shutdowns causes subtle problems that are made worse with multiple processes.
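A minimal Go sketch of a process that honors this contract: it traps SIGTERM and drains in-flight HTTP requests before the SIGKILL deadline. The port and the 25-second drain budget are illustrative choices that fit inside the default 30-second grace period.

```go
package main

import (
	"context"
	"log"
	"net/http"
	"os"
	"os/signal"
	"syscall"
	"time"
)

func main() {
	srv := &http.Server{Addr: ":8080"}
	go func() {
		if err := srv.ListenAndServe(); err != nil && err != http.ErrServerClosed {
			log.Fatalf("server error: %v", err)
		}
	}()

	// Wait for the orchestrator's graceful shutdown request.
	sig := make(chan os.Signal, 1)
	signal.Notify(sig, syscall.SIGTERM, syscall.SIGINT)
	<-sig

	// Finish in-flight work before the grace period ends and
	// SIGKILL arrives.
	ctx, cancel := context.WithTimeout(context.Background(), 25*time.Second)
	defer cancel()
	if err := srv.Shutdown(ctx); err != nil {
		log.Printf("graceful shutdown failed: %v", err)
	}
}
```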

In telecommunications, process order independence and preemption have typically been handled by orchestrators and supervisors tightly coupled to the processes they manage. With application-agnostic orchestrators like Kubernetes, the days of these custom, tightly coupled orchestrators are coming to an end, since scheduling can now be configured declaratively. A telecommunications cloud native approach should probably resemble the Erlang community’s “let it fail” approach to processes, where the calling process is robust to failures in the processes it calls.

Multiple processes and the application lifecycle

Google Cloud recommends packaging a single “app” per container. At a more technical level, a single application is defined as a single parent process with potentially multiple child processes. A major part of the rationale is harnessing the different rates of change in the application’s lifecycle. What do we mean by lifecycle? The lifecycle is the startup, execution, and termination of an application. Any process with different reasons for starting, executing, or terminating should be separated from (i.e., not tightly coupled with) other processes. When we disentangle these concerns, we can express them as separate health checks, policies, and deployment configurations. We can then declare these concerns, track them in source control, and version them semantically. This lets us avoid upgrading things in lockstep, which would pin separate lifecycles together.

The problem of managing the lifecycles of multiple applications, or process types, in one container stems from the fact that they all have different states. For instance, if a parent process starts Apache and then also starts Redis, the parent needs to know how and when to start, monitor, and terminate both. This is considerably more difficult with code or binaries you don’t control, since you don’t control how those applications express their health. This is why the best place to express process health (especially for a process you don’t control) is in configuration exposed to a container management system or orchestrator like K8s (Kubernetes), which is designed to accommodate lifecycles, and not in a makeshift bash script.
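As a hypothetical sketch of this idea in Go, a process can expose its own health over an HTTP endpoint and let a Kubernetes liveness or readiness probe, declared in the pod spec, decide when to restart it. The /healthz path and port here are assumptions for illustration.

```go
package main

import (
	"log"
	"net/http"
	"sync/atomic"
)

// ready flips to true once initialization finishes; a liveness or
// readiness probe pointed at /healthz observes it from outside.
var ready atomic.Bool

func main() {
	http.HandleFunc("/healthz", func(w http.ResponseWriter, r *http.Request) {
		if ready.Load() {
			w.WriteHeader(http.StatusOK)
			return
		}
		w.WriteHeader(http.StatusServiceUnavailable)
	})

	go func() {
		// ... load config, open connections, warm caches ...
		ready.Store(true)
	}()

	log.Fatal(http.ListenAndServe(":8080", nil))
}
```

The restart policy then lives in declarative, version-controlled configuration rather than in a supervisory bash loop inside the container.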

Multiple processes exacerbate the cloud native signal and zombie problems

Mishandling what is known as the PID 1 process in a container leads to insidious problems that are extremely hard to detect, and these problems are exacerbated when multiple processes are involved. The two main issues in properly handling PID 1 are termination signals and zombies.

SIGTERM

All applications and processes must be aware of the two types of shutdowns: graceful and immediate. Suppose a stateful application expects to open an important file, write data, and close that file, all without interruption. Given the preemptive capabilities of K8s, that application will eventually corrupt the file. The way to handle this is a graceful shutdown, which is what a SIGTERM signal initiates: it tells the application that it is about to be shut down and should start taking action to avoid corruption or other errors. Within orchestrated systems, all processes should be designed to handle a graceful shutdown if needed. But what about processes that start other processes? To terminate child processes gracefully, a parent process needs to pass the SIGTERM signal on to all of its children so that they can, in turn, shut down gracefully as well. This is where mishandling PID 1 becomes a problem: simple wrappers such as bash scripts won’t pass SIGTERM on to the processes they start unless explicitly told to. Without this forwarding, very hard-to-detect errors surface.
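The following Go sketch shows a parent process that explicitly forwards SIGTERM to its child, which is exactly the behavior a naive PID 1 bash wrapper omits. The worker binary path is hypothetical.

```go
package main

import (
	"log"
	"os"
	"os/exec"
	"os/signal"
	"syscall"
)

func main() {
	// Start a child process (a hypothetical worker binary).
	child := exec.Command("/usr/local/bin/worker")
	child.Stdout = os.Stdout
	child.Stderr = os.Stderr
	if err := child.Start(); err != nil {
		log.Fatalf("failed to start child: %v", err)
	}

	// Forward SIGTERM so the child can shut down gracefully; a
	// plain bash wrapper would swallow the signal instead.
	sig := make(chan os.Signal, 1)
	signal.Notify(sig, syscall.SIGTERM)
	go func() {
		s := <-sig
		child.Process.Signal(s)
	}()

	if err := child.Wait(); err != nil {
		log.Printf("child exited: %v", err)
	}
}
```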

An insidious SIGTERM error example

Some of the insidious errors caused by running multiple processes have been documented by GitLab. They had an issue where a 502 error would appear on a page but would mysteriously fix itself after a certain amount of time. The cause was that the graceful termination signal (SIGTERM) was not being sent to child processes, which still held open connections after the page-serving resources had already been removed. The problem was notoriously difficult to track down.

Zombies

The PID 1 process in a container is also responsible for cleaning up child processes after they terminate. This may seem simple enough, but by default a bash script running as PID 1 will not do the proper cleanup. What are the implications of not cleaning up, or reaping, child processes? These unreaped processes, known as zombies, fill up the process table and eventually prevent you from starting new processes, effectively stopping your whole node from functioning.

What do you do about this? One solution is to use a proper init system appropriate for containers. Such a system registers the correct signal handlers, passes those signals on to child processes, and calls “waitpid()” on terminated children, removing them from the process table before they accumulate as zombies.
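For illustration, the core of such a reaping loop can be sketched in a few lines of Go, using wait4() with WNOHANG to collect any exited children. This is roughly what init systems like tini do as PID 1; it is a simplified sketch, not a production init.

```go
package main

import (
	"log"
	"os"
	"os/signal"
	"syscall"
)

// reap collects the exit status of any terminated child so it does
// not linger in the process table as a zombie.
func reap() {
	sig := make(chan os.Signal, 1)
	signal.Notify(sig, syscall.SIGCHLD)
	for range sig {
		for {
			var status syscall.WaitStatus
			// -1 means "any child"; WNOHANG returns immediately
			// when no further children have exited.
			pid, err := syscall.Wait4(-1, &status, syscall.WNOHANG, nil)
			if pid <= 0 || err != nil {
				break
			}
			log.Printf("reaped child %d", pid)
		}
	}
}

func main() {
	go reap()
	// ... start and supervise the real workload here ...
	select {} // block forever in this sketch
}
```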

A proper init system handles zombies and signals

One way to limit the effect of zombie processes is to run a proper init system. This is especially true if your PID 1 process runs code you don’t control, e.g., a Postgres database, which could start other processes and then fail to reap them. With a proper init system, any child process that terminates will eventually be reaped.

Both proper init systems and sophisticated supervisors can run inside a container. Sophisticated supervisors, such as supervisord, monit, and runit, are generally considered overkill: they consume too many resources and can be overly complicated. Proper init systems are smaller than sophisticated supervisors and therefore better suited to containers; examples include tini, dumb-init, and s6-overlay.

Performance and Cloud Native Telco Processes

One of the main motivators for running multiple processes in a container is the desire for performance. Running processes in separate containers instead of the same container (with the same interprocess communication) can decrease performance, a cost attributable to the isolation and security measures built into the container system. Some of this overhead can be removed by running the container in privileged mode, but at the price of reduced security.

One misconception about separating processes into multiple containers is that any communication between them must take a performance hit because it has to occur over TCP or, worse, HTTP. This isn’t entirely true: you can retain the performance of co-located processes while separating them into different containers by using Unix domain sockets for communication. In Kubernetes, this can be configured with a volume mount shared between all containers within a pod.
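A minimal Go sketch of the server side of this arrangement: it listens on a socket file placed on a shared volume (the /sockets path is an assumption), and a sibling container in the same pod connects with net.Dial("unix", ...) on the same path.

```go
package main

import (
	"log"
	"net"
	"os"
)

// The socket file lives on a volume (e.g., an emptyDir) mounted into
// both containers of the pod; the path here is an assumption.
const socketPath = "/sockets/app.sock"

func main() {
	// Remove any stale socket left over from a previous run.
	_ = os.Remove(socketPath)

	ln, err := net.Listen("unix", socketPath)
	if err != nil {
		log.Fatalf("listen: %v", err)
	}
	defer ln.Close()

	for {
		conn, err := ln.Accept()
		if err != nil {
			log.Printf("accept: %v", err)
			continue
		}
		go func(c net.Conn) {
			defer c.Close()
			// Bytes written here never touch the network stack.
			c.Write([]byte("hello over a unix domain socket\n"))
		}(conn)
	}
}
```

Because both containers see the same filesystem path, traffic stays in kernel memory rather than traversing TCP, preserving most of the performance of a single-container design.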

In the context of telecommunications, data planes require maximum performance between concerns and therefore use threading, shared memory, and interprocess communication, at the expense of increased complexity when those concerns are tightly coupled. Interprocess communication between separate containers within the same pod should help here. Telecommunications control planes usually require less performance and can therefore be developed as traditional applications.

Conclusion

To reap the maximum interoperability and upgradeability benefits of the cloud native ecosystem, the telecommunications industry will need to adhere to the one concern rule for containers and deployments. Vendors that can do this will enjoy a competitive advantage over those that cannot.

To learn more about cloud native principles, join the CNCF’s cloud native network function working group, and see the CNCF’s CNF certification program, which verifies cloud native best practices in your network function.

Special thanks go to Denver Williams for his technical review of this article.

Endnotes

[1] The concept comes from O’Neill’s A Hierarchical Concept of Ecosystems: “O’Neill and his co-authors noted that ecosystems could be better understood by observing the rates of change of different components. Hummingbirds and flowers are quick, redwood trees slow, and whole redwood forests even slower. Most interaction is within the same pace level: hummingbirds and flowers pay attention to each other, oblivious to redwoods, who are oblivious to them. Meanwhile, the forest is attentive to climate change but not to the hasty fate of individual trees.” Brand, Stewart. How Buildings Learn (p. 33). Penguin Publishing Group, Kindle Edition.
