Month: January 2023
Stefania Chaplin
Article originally posted on InfoQ.
Transcript
Chaplin: I’m Stefania Chaplin, aka devstefops. I am a Solutions Architect at GitLab. I’m going to talk about securing microservices, preventing vulnerability traversal.
Outline
A little bit about me. Who am I? What are we here to talk about? I’ve got quite a famous example I’m going to talk you through, when talking about vulnerability traversal within microservices. Who is our audience? Thinking about the silo of the teams that we need to work with. How are we going to approach this? How do we fix this? What are some frameworks that we can use? I’m also going to look at some specific types of vulnerabilities. Most importantly, why are we here? Why are you listening? Why am I talking? Why are we looking to secure our microservices? A little summary, and then Q&A at the end.
Who Am I?
Who am I? I used to be a developer, mainly using Python and Java. I did a lot of work with REST APIs. I’m one of those cool people that can just read JSON. That’s the way my mind works. If you need a particular value, I can tell you the order of square and squiggly brackets to get you there. Then I moved to the wonderful world of security, focusing on DevSecOps, AppSec, and also CloudSec, so securing our cloud and microservices. Now I work at GitLab, the DevOps platform. If you do want to talk to me about GitLab, I love talking about it. But this talk is not a GitLab one, as you can see: devstefops.
What?
What are we here to talk about? I want to start with a bit of a story. I say a story, it wasn’t that long ago, it was a couple of weeks before Christmas. I’m not sure if anyone remembers or came across it: Log4Shell could be the worst software vulnerability ever. This headline was not alone, there were lots of equally strong headlines about this. Log4Shell was related to a component called Log4j, which is a Java component used as a logging framework. It’s very popular. It is used in a lot of different types of applications. The thing about this vulnerability is that there is a vulnerability scoring system from 0 to 10. This was a 10. This was as high as you can get. When we think about components, for example, Log4j, if anyone has come across xkcd before, I would recommend it. Really good. Very clever, IT related, and entertaining images. Here we have, this is August 2020, all modern digital infrastructure. Then at the bottom, a project some random person in Nebraska has been thanklessly maintaining since 2003. Within this context, that little component is Log4j, because it is such a widely used logging framework. I’m not going to say by everyone and everything. However, Ernst & Young and Wiz did a study, and they actually found that 93% of cloud environments are vulnerable to Log4Shell. What they found: 45% have security protection, and 20% have high privileges, which is awesome, but that also means that 80% do not. Seven percent are exposed to the internet, so we’ve got some real risk there. This Log4j, Log4Shell, it was a 10, so this is obviously quite a big deal.
The vulnerability scoring, what does it mean when it’s a 10? I’ve got a screenshot here. It’s a little complicated. It’s based on a number of factors. I have configured it here, and we can see the attack vector. For example, can I do it over a network, or do I have to be physically there? Similarly, attack complexity. Is it easy or is it hard to exploit this vulnerability? Similarly, do I need privileges or user interaction? We start to have these different scores. I would check out the CVSS calculator. You can start to play around. In this example, I can do it over a network. It’s not hard. I don’t need any privileges or user interaction. That’s going to be a 10. What does that actually mean? We have this calculator, and the way that it is ranked is you have your criticals. They are your 9s and 10s. In terms of prioritization, this is where you want to start. I’ve worked in application security a long time, and we always say, start with the 10s. Start there. Then go down to the 9.9s, the 9.8s, we go to one decimal place, and you’ve fixed all your 9s. Awesome. Then let’s move on to the highs, because what you’ll often find with vulnerabilities is you can actually start to link them together. For example, a high, maybe this is a denial of service. Mediums, you’re probably not worrying too much about; I have a nice bumblebee on the slide. However, these are things like path traversal. If you dive into a directory, and you manage to get in somewhere, if you can move around, what happens when you end up in the etc folder? Then we want to get to a happy world where we only have lows. Ideally, we want to remove those too, but we want to work our way down. Have your line in the sand, keep moving that number lower, and also have security checkpoints as you go.
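The prioritization she describes, start with the 10s, then work down through the bands, can be sketched as a small helper. The band boundaries are the standard CVSS v3.x qualitative severity ratings; the helper itself is just an illustration:

```python
def cvss_severity(score: float) -> str:
    """Map a CVSS v3.x base score to its qualitative severity band."""
    if not 0.0 <= score <= 10.0:
        raise ValueError("CVSS base scores range from 0.0 to 10.0")
    if score == 0.0:
        return "None"
    if score <= 3.9:
        return "Low"
    if score <= 6.9:
        return "Medium"
    if score <= 8.9:
        return "High"
    return "Critical"

# Log4Shell (CVE-2021-44228) scored the maximum:
print(cvss_severity(10.0))  # Critical
print(cvss_severity(9.8))   # Critical, fix these next
print(cvss_severity(7.5))   # High, e.g. many denial-of-service bugs
```

Sorting your backlog by this band, then by the raw score to one decimal place, gives exactly the "10s first, then 9.9s, 9.8s" ordering described above.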
Why Log4Shell could be the worst software vulnerability ever: hopefully, now you have a better understanding. It was a very common component, and it was a very high severity vulnerability. It was very easy to execute, and it was accessible before any authentication. That was the thing, poor Log4j. If you think of the person in Nebraska, this is how the Log4j maintainers felt, because after the critical, which was a remote code execution, so someone could go in and execute code remotely, next came a denial of service. That was fixed in version 2.16. Then came another denial of service, fixed in 2.17. Then finally, another RCE, another remote code execution. You’ve really got a mix, and that’s just this one component. When you think about your microservices, unfortunately, it might look a little bit like this: you’re going to have bugs all over the place. What we’re trying to prevent is, here, I have Log4j, and maybe it’s got in within a particular process, within a particular service, for example. Then, what happens if it starts moving around, and all of a sudden it’s within your main application, within your critical data? What we really want to do is have checkpoints that will prevent this vulnerability traversal, and stop our beautiful ladybugs from getting into our microservices systems.
Common Pain Points
Common pain points. Log4j: how do you know what is where? And on top of this, how do you prioritize? Siloed teams, so lack of ownership and accountability. The person that creates the service, do they then own the service? Are they responsible for security? Who owns what, for example? Then, delays, fails, or worse. What this means is you don’t want to have any bugs in your system. Then, what happens if something goes down? We hope in a microservices system you can fix things quickly, patch quickly, scale back up quickly. What happens if the bug moves around? Also, what happens if your team is spending the majority of its time fixing bugs, doing patches, or fighting fires? You really don’t want that from a cultural perspective, also in terms of careers and attrition. Ultimately, you do not want to be front page news because of a vulnerability, because that obviously has massive impacts for your brand as well.
Who?
Who are we talking about? I’m going to talk to you about my silos. I’m really talking about the main three. I do have one over there, sometimes that’s corporate. Let’s focus on these three. I’ve got my developers. They might be provisioning microservices. They might be provisioning environments. They might be running the applications within the microservices. They might be developing processes. They’re usually going to be the ones writing the code. Developers, I used to be one, we like new, shiny. We tend to be quite intelligent, quite creative. We like to stay within our tools and within our world. Then you have security. Security are usually outnumbered. I’ve seen different ratios. I’ve visited a lot of different organizations in my time in security. One time I spoke to an organization, they had about 130 devs, and 2 security, so 1:65. I wanted to applaud them, because I was like, that’s the best I’ve ever heard. Because on the opposite end, I’ve been to places where it’s 300 to 1 AppSec. Similarly, when you’ve got tens of thousands of developers, ok, you may have a large AppSec team, but there’s always going to be quite an outnumbered ratio. One of the things I like to think about is, how can we empower developers? How can we get everyone to be 1% better? I’m a big fan of compound interest and incremental gains. I’m not saying we want security to become developer and vice versa. Start to break down these silos. Let’s start to be on the same team. Let’s start to work together.
Finally, you have Ops. While devs like new and shiny, Ops likes stability; it’s literally what they’re measured on. Areas of importance are reliability, availability, latency, uptime. Ops wants things to be stable. Square peg, square hole; sometimes devs come along with this weird shape. There can be a bit of an issue there, and DevOps is great for that. What I’m really talking about is not a case of buy all these tools, do all these processes. It is more about culture and about changing how we work together, because we are all on the same team. It’s not about the bad guy who did this, who did what. When there is a situation, how do you deal with it? Is there a psychology of fear, a culture of fear, or is it more proactive? Ok, there’s a problem, who knows what? How do we fix it? Then afterwards in the debrief, how can we do better next time? Because when you have these three groups, they’re kind of tribes; they speak differently, they have different influences, different priorities. It’s very much about getting everyone to work on the same team and work together. Because you don’t want to end up like this: here, my Ops silo is not doing so good. It’s a little bit broken. Because somewhere within our environment, there might be a cryptominer that got into one of our Kubernetes instances and has spawned 200 pods. We’re maxing out our AWS bill, and Ops has been paged in the middle of the night: what’s going on? How do we fix this? What we want to do is have a more proactive approach, think more secure, secure by design.
Who Owns What?
Who owns what? This is a very good place to start when you think about microservices, because ownership and accountability are very important. Martin Fowler co-authored a very famous article describing nine characteristics of microservices. One of the ones he talked about was cross-functional teams. When you’re working, instead of having your tribes like my silos before, why don’t we do cross-functional teams, and organize around capability? We organize around products. Instead of having a project mindset, we go to more of a product mindset. I’m like, this is the mobile app team, or this could be customer login, or however it might be. With these cross-functional teams, where is security? Do we have a security person within each team? Do we have one security person to about five teams? Could we maybe get everyone to be a little bit more secure? How can we facilitate security, because there’s a lot of worry in a CISO’s life? It’s very much about thinking who owns what and where security sits within all of these processes as well. Prioritization is a very important one to think about. High risk, start there. Whether that’s the types of vulnerabilities or the types of applications, you want to go for your high risk and your high exposure first.
If anyone’s come across the Eisenhower decision matrix, I actually use this for productivity. You want to think of your two by two: urgent and important. If something is urgent and important, like maybe a critical vulnerability, you’re going to want to do it. Similarly, if something is important, but not as urgent, plan to do it. Maybe those would be your lower severity bugs, for example. If it’s urgent but not important, delegate. If it’s not really urgent or important, is it worth your time? Because I’m sure there are a lot of other things you should or could be doing that are urgent and important. This applies not only to day-to-day life, but also to vulnerabilities. It’s also worth mentioning false positives, because unfortunately, they do come up. When you have security tooling, there can be instances where there are false positives. What I say when those come up is, see if there’s any way that you can sanitize your results, because developers don’t really like being given false positives. If you’re working with security tools or vendors, see how many false positives you’re getting. Is it 1%? Is it 99%? Is it an issue? Is it not? Also, how do you identify them? How do you know that something is a false positive? Because the person who says it’s a false positive might be a developer who might not have a full understanding. Coming back to my silos, that’s why we want to give developers and Ops and everyone more of a secure mindset. Then you’re just not introducing these vulnerabilities. As much as we don’t like false positives, what is even worse, actually, is a false negative. I personally would rather have a larger list, which might have false positives, than a smaller list that misses some bugs. Because if it’s unknown, if you don’t know that you’ve got vulnerabilities in production, then you’re in more trouble than if you just have a list with false positives.
How?
How are we going to do this? I’m going to talk about some common frameworks. I’m going to go a bit granular and talk about some specific vulnerabilities. The reason is, they’re very popular in security, and it’ll give you a bit more of an understanding. Because I’m hoping if I can make everyone listening 1% better at security, my job here will be done. The Center for Internet Security has this awesome framework of controls. This is V7; I think there’s just been a V8. What I would recommend for everyone, just focus on the basics. Let’s start there. Especially with security, we want to build strong foundations and move our way up. For example, the first two are just inventory and control of hardware, and then of our software assets. Yes, do this. Back to Log4j, do you know what is where? For example, do you have a software bill of materials listing all of your open source components for all of your different microservices? Simple: yes, no, maybe.
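The software bill of materials question can be made concrete with a small sketch: given an inventory of services and their open source components, which services still carry a vulnerable Log4j? All service names, components, and versions here are hypothetical, purely for illustration:

```python
# Hypothetical inventory: service name -> list of (component, version) pairs,
# the kind of data a software bill of materials (SBOM) would give you.
inventory = {
    "payments":  [("log4j-core", "2.14.1"), ("spring-web", "5.3.13")],
    "checkout":  [("logback-classic", "1.2.7")],
    "reporting": [("log4j-core", "2.17.1")],
}

def version_tuple(v):
    """Turn '2.14.1' into (2, 14, 1) so versions compare numerically."""
    return tuple(int(part) for part in v.split("."))

def affected_services(inventory, component, fixed_version):
    """Return services still running a version below the fixed release."""
    hits = []
    for service, deps in inventory.items():
        for name, version in deps:
            if name == component and version_tuple(version) < version_tuple(fixed_version):
                hits.append((service, version))
    return hits

print(affected_services(inventory, "log4j-core", "2.17.1"))
# [('payments', '2.14.1')]
```

Without this inventory, answering "do you know what is where?" during an incident like Log4Shell becomes guesswork; with it, the query is one function call.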
Similarly, some of these other ones, for example, continuous vulnerability management, so adding the security. Looking at admin privileges. Configurations, and also maintenance, monitoring, and analysis of logs. Logs are so important. As a developer, I didn’t like writing them, I just found it a bit tedious. In security, I’m like, that’s so important. Because when something does go wrong, and unfortunately, it probably will, you really want to know what is happening. If you have someone bad in your system, what are they doing? If you don’t have logs, and if you’re not monitoring, then you’re not really going to know. They could do a lot of damage. I’m hoping, if you haven’t already, at least now you’re inspired to work on the basics.
Then we can move to foundational. There’s a lot of good stuff going on there. I’m going to talk a little bit about application software security, because that’s where Log4j was. Also, these lessons can be applied throughout. It’s not just applications, it’s the entire microservices architecture as well. I’m going to talk to you a little bit about OWASP, which is the Open Web Application Security Project. They do a top 10 of types of vulnerabilities. They just released a new one for web applications at the end of last year. I’m one of these cool people, I know my top 10. You could have asked me, what’s A6? And I would have been able to tell you, security misconfiguration, but it’s changed now. The new number 4 is insecure design. This is a very interesting one, because for one, it actually includes a lot of types of vulnerabilities. Vulnerabilities have something called a CWE, which is a type. There are over 40 types within this insecure design category, because you’re really starting to shift left, or start left, looking at threat modeling and thinking, how can we design securely? It’s a bit like if you have a car, you don’t want to test the brakes by crashing your car into a wall to see if the brakes work. You probably want to test your brakes before then. Maybe you just want to design good brakes, because brakes are quite important in cars. The reason I’m highlighting this is because this is the first time in any of the OWASP top 10s that we’ve really started to think more laterally about how we design software.
The other ones that are worth mentioning, because I’ll be talking about them: broken access control, which I give a thumbs up on the slide. It basically means, you’re in our system, you can do whatever you want. You don’t really want to be telling everyone, do whatever you want. Similarly, broken authentication. This is your front door. For example, who are you letting in? If you’re letting anyone in and you’re letting them do anything or everything, you’re not going to have a good time. This is the web version, great for developing applications. Within microservices, we use a lot of APIs, and there’s an API top 10 as well. If we look at the two vulnerabilities I was talking about, the thumbs up, for example, can everyone do what they want? That’s number 1 and number 5: broken object level authorization and broken function level authorization. I’m going to go into what these mean, so don’t get intimidated by the difference between them, because hopefully you’ll understand by the end. Also, broken authentication, which is number 2. This is with APIs: how accessible, how easy is it for people to get in because you’re not authenticating correctly?
Authentication and Authorization
Authentication and authorization, what do they actually mean? I know I’ve alluded to it, but Okta has a great definition. Authentication, on the left, confirms users are who they say they are. You want to be doing that at every stage. I always say this about security: who can do what, and how? With authentication, we’re looking at the who. Who are you? Are you really you? Then authorization, on the right, gives users permission to access a resource. This means, ok, I’ve checked who you are, and you’re allowed to do this. I have a house example. My authentication, that’s my front door. Then my authorization is, once someone is let in, can they open the safe, for example? You can get quite sophisticated with this. For example, you might need a worker to come in, and they only need access to the spare room, and they have a key to get into a cupboard where there are pipes or something. It can get very sophisticated. This is what we recommend. You do not want to have an open door and all permissions for everyone.
Broken authentication. An API is vulnerable if it permits a couple of things. Credential stuffing: if you’ve got a big list of credentials from the dark web, and you just start going for it, stuffing credentials into APIs. Similarly, brute force. For example, maybe you’ve got a username, you don’t know the password, so you’re brute force attacking. Within APIs, you might have lack of resources and rate limiting, which is, how many times can you hammer an API in a certain amount of time? That’s another vulnerability in itself. When we look at authentication, we want to be even stricter, specifically around the credentials: weak passwords, weak encryption keys. Also, don’t send sensitive details in the URL; you do not want to include tokens there. For example, if we think back to my beautiful picture of microservices with all the bugs, maybe our first bug was just a leak, we leaked a token. Maybe that token belonged to someone senior, or maybe not, but once we got in as that user, we found we could actually do path traversal. We could wander around the system. You are as strong as your weakest link. You really want to lock your front door properly. In terms of preventing it, make sure you know all your possible flows: mobile, web, deep links, SSO, one-click.
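The rate limiting defense against credential stuffing and brute force can be sketched as a minimal sliding-window limiter. This is an illustration of the idea, not production code; real deployments would use shared state (e.g. a cache) across API instances:

```python
import time
from collections import defaultdict, deque

class RateLimiter:
    """Sliding-window limiter: at most `limit` attempts per `window` seconds."""

    def __init__(self, limit=5, window=60.0):
        self.limit = limit
        self.window = window
        self.attempts = defaultdict(deque)  # key (e.g. IP or username) -> timestamps

    def allow(self, key, now=None):
        now = time.monotonic() if now is None else now
        q = self.attempts[key]
        while q and now - q[0] > self.window:  # drop attempts older than the window
            q.popleft()
        if len(q) >= self.limit:
            return False  # throttle: looks like credential stuffing / brute force
        q.append(now)
        return True

# A client hammering the login endpoint gets cut off after 3 tries per minute:
limiter = RateLimiter(limit=3, window=60.0)
results = [limiter.allow("203.0.113.7", now=t) for t in range(5)]
print(results)  # [True, True, True, False, False]
```

Keying on the source IP throttles stuffing from one machine; keying on the target username also slows distributed attacks against a single account.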
Also, speak to your engineers. This is really why we need to work together, because when security is talking to development, and vice versa, we are all on the same team. We’re trying to secure our microservices and our applications. Just have a constructive conversation about it without any single blame. Similarly, read about authentication mechanisms. Understand what you’re doing. There’s a lot of who, what, how. Also, there are a lot of standards. Don’t reinvent the wheel; don’t write your own authentication. There are some really good standards out there. If you’re tempted to write your own hashing mechanisms, token generation, or password storage, I would definitely recommend using standards instead, because they will be more secure. Unless, obviously, you’re a developer of one of the standards. Also, OWASP has a great authentication cheat sheet.
I’m going to start to talk about authorization. Authentication, that’s our front door, who we let in. Now someone’s in the house. Looking at APIs, I want to get my information. I’m user1, I just want my data. This is my command, GET user1. It’s returned user2. All of a sudden, I wanted my bank details but it’s given me someone else’s. Unfortunately, this is what happened to T-Mobile a few years ago. Their customer data was plundered, thanks to a bad API. This is the thing, these types of vulnerabilities are a sure way for attackers to start harvesting data. How do we prevent it? The thing about authorization, the real thing, is permissions. Make sure that you have an authorization mechanism, with user policies and hierarchy. Use your hierarchy. Is everyone an admin? I hope not, because that sounds very insecure. Does everyone need to be an admin? Having your different hierarchies of who can do what and how is a definite tip. I hope everyone can take that with them. Also, use a mechanism to check whether logged-in users have access to perform what they want to do, because maybe someone will try something; just check, are you allowed to do that? Then also, use random and unpredictable values for your IDs, because if they’re sequential, a bit like when I tried to get user1 and got user2, an attacker can enumerate them; if it’s more random, you will be a little bit safer. This is object level: I wanted user1, I wanted my bank details, I got someone else’s.
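The two tips above, unpredictable IDs and an ownership check on every access, can be sketched like this. The in-memory store, user names, and account strings are all hypothetical:

```python
import uuid

# Hypothetical store keyed by random UUIDs rather than sequential integers,
# so an attacker cannot guess a neighbour's record ID (no user1 -> user2).
accounts = {}

def create_account(owner, iban):
    account_id = str(uuid.uuid4())  # unpredictable object ID
    accounts[account_id] = {"owner": owner, "iban": iban}
    return account_id

def get_account(requesting_user, account_id):
    """Object-level authorization: check ownership on every single access."""
    record = accounts.get(account_id)
    if record is None or record["owner"] != requesting_user:
        raise PermissionError("403: not your object")  # fail closed
    return record

alice_id = create_account("alice", "GB11 0001")
bob_id = create_account("bob", "GB22 0002")

print(get_account("alice", alice_id)["iban"])  # alice sees her own data
try:
    get_account("alice", bob_id)               # alice probing bob's record
except PermissionError as exc:
    print(exc)                                 # 403: not your object
```

Note that the check compares the authenticated caller against the object’s owner; knowing a valid ID is never enough on its own.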
Broken function level authorization. This is slightly different. I’m a user and I want to get some stuff. GET user, maybe I want to get a list of roles. Then I got admin access, I got everything. This is also a real problem. Before it was objects, now we’re dealing with functions. This is what happened to Gator smartwatches. This is an IoT smartwatch, and it started exposing kids’ personal data. I find, when it’s companies that deal with baby monitors, or kids’ smartwatches, or anything personal, it always hits a little bit harder. How do we prevent this? Similar. Have consistent and easy to analyze authorization modules, so role based access control, having your hierarchy. Also, from a security standpoint, really, to be secure, deny by default. Have that mindset, deny everything. Then when people need stuff, maybe we allow it. Look at your endpoints, and be aware of any function level authorization flaws there might be, because you also will have to look at the business logic. You might need to look at what’s going on, because the reality is you can’t deny everything by default, because then no one can do anything. You have to have acceptable flows; just make sure those are as secure as possible. Something that can often catch people out, for example, is Lambda. A Lambda is ephemeral: it appears, it does a job, it goes. What permissions does that Lambda have? What can it do? Yes, it might only be around for three seconds every Tuesday. But if someone managed to intercept it and take control, and that’s the Lambda between two very important applications that aren’t authenticating each other, all of a sudden, you might have a problem.
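The deny-by-default, role-based check described above can be sketched as a decorator wrapped around each endpoint. The permission table, role names, and endpoints are hypothetical; the point is that anything not explicitly allowed is refused:

```python
import functools

# Hypothetical role-based permission table. Deny by default: any action
# missing from this table is allowed for nobody.
PERMISSIONS = {
    "GET /users/me": {"user", "support", "admin"},
    "GET /users":    {"support", "admin"},
    "DELETE /users": {"admin"},
}

def authorize(action):
    """Function-level authorization check applied before the endpoint runs."""
    def decorator(func):
        @functools.wraps(func)
        def wrapper(role, *args, **kwargs):
            allowed = PERMISSIONS.get(action, set())  # unknown action -> empty set
            if role not in allowed:
                raise PermissionError(f"403: {role!r} may not {action}")
            return func(role, *args, **kwargs)
        return wrapper
    return decorator

@authorize("DELETE /users")
def delete_users(role):
    return "deleted"

print(delete_users("admin"))   # an admin may call the admin function
try:
    delete_users("user")       # a regular user hitting an admin function
except PermissionError as exc:
    print(exc)                 # 403: 'user' may not DELETE /users
```

The business logic exceptions she mentions then become explicit entries in the table, reviewable in one place, rather than scattered if-statements.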
Now What?
Now what? I come from DevSecOps, and I think there are a couple of lessons to be learned from there. The main one is multiple security gates, because security is not an add-on at the end. It’s not, we’ve built our microservice, now let’s add security. It doesn’t quite work like that. There’s a picture I’m going to show with a lot of information, so I’m going to break it down. Here, we have Dev and Ops, and they’re working together. First, we’re going to look at the dev side. These are the developers. Maybe we’re provisioning environments. Maybe we’re writing the code. Maybe we’re doing the app. Let’s think about threat modeling while we’re planning. Maybe we want to have integrations within our developer environment, within IntelliJ, for example, or any other IDE. What application security scanning are we doing? If we’re doing Terraform, are we using Checkov or any other provider to check that our Terraform script isn’t going to accidentally leave something open to unauthenticated users? Other ones are fuzzing and integration tests. Maybe have a Chaos Monkey, if you’re into the Netflix approach and you want to start destroying things; that’s a good idea.
Similarly, when we move over to more of the cloud, to the Ops side, now we have our application, it’s been tested, we’ve done the first half, but then we want to release it. Then we’re going to prevent, detect, respond, predict. There are a lot of areas within that, for example, integrity checks, penetration tests, network monitoring. I personally think incident response is one of the most important roles, just because, at some point, unfortunately, something will go wrong. How quickly can you recover from that? Mean time to remediation when something goes bad is one of the key DORA metrics as well. If someone drops your table, how long does it take to get back up and running? There’s a lot of information; you can see we’ve got 30, maybe 40 different security checks. Yes, you do need all of them, but we can’t all go from zero to hero overnight. What it’s about is taking that mindset. When you think about your front doors, when you think about who can do what, start thinking about your different microservices and how we can stop this vulnerability traversal by having security at every stage. Make it incremental, because it’s a lot better for me to find out I’ve got a problem in my Terraform script when I see a Checkov scan, versus when it’s in production and we’ve just been hacked because I wrote bad Terraform, for example.
The truth is, and this is a great quote by someone called Brian Foote: if you think good architecture is expensive, try bad architecture. The same is true for security. If you think about all those tools, you’re like, how much is that going to cost? Yes, security isn’t cheap. But neither is being on the front page, and especially in the last two years, we’re seeing a massive rise in ransomware, for example. It used to be that people would just take your data, for example, Marriott Hotels, Equifax, lots of customer data. Now what we’re seeing is, ok, now you’ve got ransomware, now you’ve been [inaudible 00:34:26]. It’s the same approach to security, where you really want to have good security and add that to your already good architecture.
Why?
Why is this important? Why are we here? Why do we care? Microservices are awesome. I think so. There are pros and cons, obviously. What you end up with is cross-functional teams, which is also awesome, that develop, test, problem solve, deploy, and update services independently. Instead of a monolith, you can scale and configure at the microservice level. This leads to faster deployments and troubleshooting turnaround times. When we add security, when we shift left with security, so for example, the Terraform example, using Checkov scanning, or having app scanning as we’re writing our different processes, this will mean less downtime. It will also mean less frustration. Actually, whether it’s developers, security, or operations, no one likes having bugs. No one actually likes them. Security sometimes likes finding them, because then it’s like, ok, I found a bug, job ticked. But actually, we don’t want to have bugs. Especially with the frustration, if you find your team is spending the majority of its time firefighting, dealing with everything, you’re going to be having a bad time. If you shift left, you end up being able to deliver better software faster. All of a sudden, if you can deliver faster, if you’ve got less downtime, if you can listen to your customers and hear what they’re looking for, and you can deliver it quickly, and you can innovate and iterate, you’re going to start outperforming your competitors. Doing all of this together will drive true business value.
Summary
Prioritize high risk and high exposure vulnerabilities. Start with the 10s. Unfortunately, you might have some. Go have a look around. Start there, then the 9.9s, the 9.8s. Think, who can do what, and how? Is our front door locked? Is it closed? Are we stopping people from doing whatever they want? That can be applied everywhere in the world of security. As you’re going back to your day jobs after this, have a real think, in terms of your microservices, about how easy it is for bad actors to get in, and what they can do. Once you have that, have multiple security gates. Where I come from, DevSecOps, think about having this iterative approach to security, so that you hopefully don’t end up with any high risk, high exposure vulnerabilities. If you are specifically interested in Log4j, I work for GitLab, and we wrote a great blog about it. You can see the link here, https://about.gitlab.com/blog/2021/12/15/use-gitlab-to-detect-vulnerabilities/.
Questions and Answers
Wells: Any advice on how you get developers to feel responsible for security or to care about it?
Chaplin: Yes, it’s a bit of a tricky one, especially with security. It’s almost like passing the hot potato between different personas. Something that can be really effective, I’ve found, with developer groups is: make it fun, and make it automated. When I say make it fun, what you can do is have, for example, an internal hackathon. It could even just be an afternoon. You can either have a red team and a blue team, or put everyone on the red team. What normally happens is the senior developers normally know about a few skeletons in the closet, so they will find bugs. Then especially for the junior developers, it’s like, is it really this easy? That can make them realize, maybe I really should care about security. Because also with security, there’s been a real change over the last five years. It used to be, if you got hacked, all of your personal data or customer data, for example, would go, which is obviously very bad. Now, if you get hacked, it’s ransomware.
I was actually talking to someone literally last weekend, who works in IT. Their son is autistic and loves Hula Hoops. Because KP Snacks got hacked by ransomware a few months ago, he could not get Hula Hoops for his son. His son was like, “Daddy, these are not Hula Hoops.” With the way that security is moving with ransomware, you really do not want to be hacked, because then it starts to become serious decisions. First point, make it fun. The second one, automate it where you can. I really liked the question about the IDE plugin, because you want to shift left. You want to get it within the existing developer workflow. You don’t want a case where you’re going through line by line. You want a case where a tool goes through line by line, you get your results, and the results are then delivered to a developer in a way they understand. Ideally, coupled with that, you can also get secure coding training integrations. I used to work at a company called Secure Code Warrior. What will happen is, a developer is told, it’s insecure deserialization. What does that mean? Actually, that’s to do with the way a block of string or bytes gets turned back into objects. Yes, helping developers understand the importance of security, making it fun with a hackathon. Also, with the hackathon, you don’t even have to have real prizes, you can have an Uber Eats voucher. You could be nice, you could have an iPad, but even recognition works. Recognition is so important. For example, a letter or an email from the CISO, or the VP of engineering: “Well done for your activity in the hackathon.” Some people want public recognition, some want private, so do that as well. I think definitely awareness, carrot and stick with it. Where you can, make it fun and make it automated.
Wells: I worked somewhere where they did something similar, and I was a bit nervous about doing it but it was so much fun. There was so much guidance on here’s what you can do, that it was absolutely great. I second it, it was really fun.
We have Black Duck, WhiteHat in our CI/CD, is that enough or should we be thinking about other tools also?
Chaplin: Black Duck, from my knowledge, focuses on open source. That might be the components that you’re getting from Maven Central or PyPI. That is one piece of the puzzle. What GitLab offers within its pipeline, you have static code analysis, that’s the code the developer writes. You have your open source, so that’s what Black Duck is doing. You do dynamic application security testing. Are you leaking anything in headers, because it’s the running application? Secret detection, are there any API keys or any other credentials out there? API fuzzing. If you’ve got APIs, especially in microservices, that’s important. Code quality. I know there’s about seven or eight. Black Duck is a great place to start. All developers use a lot of open source, but that is really only one piece of a very big puzzle. One of the benefits of using GitLab is you get all of that testing in one place, and you get all the results in one place. Then the developer at the point of code commit can run their security scans, they can see, I added Log4j version 2.17, that still has lots of vulnerabilities. I need to upgrade to 2.19, for example, or whichever the latest version is. It’s a good place to start. I would recommend thinking about having other security tools.
Wells: As we create the platform with security first in mind, we do perform penetration testing twice a year via a specialized third party company. What is your thought on that? Is it too much? Is it not enough?
Chaplin: It depends if it’s the only thing you’re doing. Because you’ve said you create a platform with security first in mind, to me, if I just read the first sentence, I’m like, you’re thinking about the secure by design I mentioned at the beginning. Maybe you’re doing threat modeling, you’ve potentially got the security in the IDE, then all the others in the CI/CD. If you have a security first approach, and then you’ve got security integrated, then, yes, pentests, they’re good, but are they enough? I’ve worked with an organization, what they did, they were actually using Veracode. That’s another static code analysis tool. They would go to the pentesters and say, these are the bugs we already know about from our security tests, we want you to find the advanced ones. I think that’s where pentests really come into their element, because you’ve got your low tier easy bugs, and that’s where you can use the automated security tools. You want your pentesters to really try and find some weird and funky angles, similar to, say, bug bounties. For me, the difference between pentesters and bug bounties, it’s almost the same skill set. It’s just different people doing them. I don’t know enough about the rest of the security you’re doing. If you’re only doing pentests twice a year, that isn’t enough. You want to look at some of the other approaches I mentioned. If you already have everything, then yes, I’d say twice a year, or every couple of months, is a good amount. Just make sure they’re focusing on the advanced pentesting.
MMS • Kimberly Fox
Transcript
Shane Hastie: Hey folks, QCon London is just around the corner. We’ll be back in person in London from March 27 to 29. Join senior software leaders at early adopter companies as they share how they’ve implemented emerging trends and best practices. You’ll learn from their experiences, practical techniques and pitfalls to avoid, so you get assurance you’re adopting the right patterns and practices. Learn more at, qconlondon.com. We hope to see you there.
Good day, folks. This is Shane Hastie for the InfoQ Engineering Culture Podcast. Today I’m sitting down with Kimberly Fox from Market to Table Podcast, and bringing some ideas around cooking. So completely different to what’s normal, but I think we’re going to have a great conversation. Kimberly, welcome. Thanks for taking the time to talk to us today.
Kimberley Fox: Thank you. I’m really excited to be here.
Shane Hastie: Tell me a little bit about Kimberly. Who’s Kimberly of Market to Table?
Introductions [01:03]
Kimberley Fox: Yeah, absolutely. So this started… I started out working in analytical chemistry, and I went back to graduate school for food science after six years working in analytical chemistry, and I ended up graduating in May 2020 when the pandemic hit. And there weren’t a lot of food science… there weren’t a lot of jobs in general during that time. And so I ended up starting a food blog and I really enjoyed it, and I eventually got to a point where I wanted to find a way to monetize it. At the time, cooking classes were really popular because everyone was at home, and people were looking for things to do. And I eventually kind of switched gears and went into the corporate team building space, and now I exclusively do corporate team building and I specialize in empowering women in STEM, and building inclusive teams. So I’m really excited to be able to talk about it.
Shane Hastie: Cooking classes to empower a woman in STEM and create inclusive teams?
Kimberley Fox: Yep?
Shane Hastie: There’s got to be a link there. Please make that link for us.
Making the link between cooking and team building [02:17]
Kimberley Fox: Yeah, so I spent over 10 years in STEM, and at the time when I was looking at corporate team building, that’s not where I went at all. I’ll just be very frank, that is not a connection I initially made. At the time, I was being kind of forced, essentially, to do weekly team building. And I was sitting there one day and I was like, “Why do these suck so much?” I think team building, when you think about it, it’s like forced participation, forced connection… as like an employee, I looked at it as, how can I get out of this? And as a manager, they want your team to work better so that they can create better products so that they can push innovation forward. And then you have the employee on the other end being like, “I don’t want to be here, get me out of here. What do I have to do to get out of here?”
And so that’s one part of why I wanted to pursue corporate team building is because I actually wanted it to be effective. I wanted it to be fun where the whole entire team looked forward to going, and participating, in a team building activity. So that was one part. The other part is that I realized after reflecting over 10 years, I spent some time working in Northern Ireland, and when I was there, they were having the same conversations around women in STEM, that we were having in the US, and it was almost like a light bulb moment for me, where I realized there is nothing wrong with me, and where I’m in a completely different culture and they’re having the exact same conversations of women in STEM.
Creating a culture where women want to stay [03:51]
And so I personally just am really passionate about women in STEM, and it’s about creating a culture at your company where women actually want to stay. Statistically, women leave STEM within five to seven years, and it’s not because of the gender reasons that we usually think of, such as maternity leave, and poor work-life balance, things like that. It’s actually because of lack of communication and unsupportive managers, and because they’re not feeling like they belong to their team, and so they’re leaving. And so part of my “why” as to these corporate classes, is to help managers, help teams create a culture in which women actually want to be a part of and stay.
Shane Hastie: So what does that culture look like?
Kimberley Fox: I think that culture looks like where women are respected, where they’re allowed to talk, they’re not talked over, and their perspective is welcomed rather than not welcomed. And what I mean by that is, a lot of times, especially in male-dominated STEM culture, especially in engineering, is that a woman brings a different perspective to the team. And so she’s the only woman in the room along with all of these other men, and then she says something kind of pushing back and saying, “Well, what about this?” Sometimes her voice isn’t heard because everyone else is thinking the same way and she’s the only one with a different perspective on it. When a leader looks and sees that that one person’s perspective is actually really valuable because she’s seen it in a completely different way than everyone else, then I think that that is what an inclusive culture is. And being valued is what makes a woman want to stay and feels like she belongs and that she’s able to contribute to the team.
Shane Hastie: So how do we say that in this day and age, we want inclusive teams, we know the benefits, and it’s been well researched, it’s been clearly publicized, but we’re still not getting there?
Kimberley Fox: No.
Shane Hastie: What’s going on?
Kimberley Fox: I know. We’re not. And what blows my mind is that this past summer I worked with a lot of nonprofit organizations who work with women in STEM. There’s women in data, there’s women in engineering, there’s all of these nonprofit organizations that are working to try to help women in STEM. And yet when you get in the company, we’re still struggling to create inclusive cultures for women. And in these nonprofit organizations, women feel more comfortable coming out and expressing what their frustrations and what they’re experiencing as a woman in STEM. And it’s like, why is this still a big issue?
And I think part of the reason… I’ve been on teams where we know what’s going on, we know that women’s voices aren’t being valued. We know that they’re being talked over, but no one does anything about it. And I think one of the best things leaders can do is say, “This is not tolerated”, and kind of emulate how the team should work as a cohesive unit. And so in that respect, I really do believe that it comes from the leader. I’ve been on a team where the leader had zero tolerance for any sort of gender related comments or anything like that. And as a result, had an extremely inclusive team in which everyone’s voice was heard and there were no stupid ideas.
Shane Hastie: So creating that space where every voice is heard, and every voice is not just heard but welcomed. So how does a cooking class help that?
Cooking classes to help build collaborative culture [07:46]
Kimberley Fox: Yeah. And so I was very intentional on how I designed these classes because I knew what outcome I wanted, but I needed to be able to design an experience that resulted in this outcome. And so what I did is I started studying a lot of people who are experts, who gather people. Their main job is to gather people. And I looked at how they gathered people, and so I used cooking and cocktails to connect people. The cooking and cocktails is just fun, but it’s really about how the team connects throughout this experience that is the magic of it. Of course, they leave with a cocktail, of course, they leave with a delicious meal. And so I do it from a personal story and a personal experience perspective, being able to engage with each individual throughout the class.
And also we do an activity where I ask people to gather and share some personal experiences, and I create a theme for each class depending on where the team is at. And I say this because, someone who is hiring a new employee and wants to bring them into the fold of the team, is going to be a lot different vibe and dynamic, as a team that just went through a merger and they need to work together as a team. They’re two completely different reasons to gather, and they’re both really good reasons. And so when I’m working with leaders to design the experience, these are the kind of questions that I ask so that I can tailor the experience to where their team is at, at that time.
Shane Hastie: What about the team that is distributed?
Kimberley Fox: Remotely?
Shane Hastie: Yeah.
Doing these activities remotely [09:33]
Kimberley Fox: Yes, that is actually something I should have mentioned, so these are virtual. I do in-person locally where I live, but the majority of what I do is virtual, because tech is virtual right now, and people love working virtually. And that is the future. And so everyone comes, everyone is in their own kitchen on screen, which is fun. And we all cook together virtually. So I’m in my kitchen, everyone else is in their kitchen. And the classes are designed so that if you’ve never cooked before, if you hate cooking, you’ll still be able to make a standout dish at the end. So there are no prerequisites for taking the class.
Shane Hastie: Building on that remote experience and possibly the hybrid, this adds an extra layer to the cultural complexity today.
Kimberley Fox: Yes.
Shane Hastie: How do we overcome that? How do we weave this in effectively?
Don’t put the burden of communication on a new team member [10:30]
Kimberley Fox: It absolutely does. I think the example that I can think of right off the top of my head is that if you’re hybrid, or if you’re on a team where the majority of people are in-person, but you’re a remote worker, or your whole entire team is remote, how does that remote person know who to contact for what, and how do they learn to work with different people when they’re not physically running into them in the office? And a lot of people say, “Oh, reach out and ask for coffee dates with each person.” And when I think of that, I think, well, now you’re just putting the burden back on the new employee to reach out to all of these people that they’ve never met, to ask for a coffee date. And on the other side, they might be really introverted and be like, “I’m not going to do that. That’s not something I would enjoy.” Or you’re asking another person who wouldn’t enjoy that either.
And so I think that in a remote environment, we need to bring people together. We need to have some structure when we bring people together so that people can connect. I like to call it like a non-workplace because we have to create those fun online spaces for people just to have fun and be able to enjoy one another. And that’s another reason why this team building kind of fits that. But there’s so many other ways that you’re able to just bring people together online so that they can just enjoy each other’s company. And I know people have been constantly thinking of new ways to do this because they really see the value in just being able to gather and not necessarily have an agenda.
Shane Hastie: Gather without an agenda in a space where we can have fun.
Kimberley Fox: Mm-hmm.
Shane Hastie: How do I make sure that that gathering without an agenda in a space where we can have fun is truly inclusive?
Gathering without an agenda in a space where we can have fun in a truly inclusive way [12:25]
Kimberley Fox: That’s really hard, right? And that’s a really good question. It’s because with holiday parties, they say, “Oh, we want everyone to gather.” But then it doesn’t end up being inclusive because people just go and talk to the same people all the time. And it doesn’t really create a new dynamic. And so, when I say not an agenda, I meant not a work agenda, not that you should just bring people together randomly and not have some sort of structure to what you’re going to do. If everyone really likes board games, you could do a virtual board game or something like that, so that it creates just enough structure so that people know what to do or where to go, but not so much structure where they’re in that situation that you just described. It doesn’t feel inclusive, and it actually ends up having the opposite effect of isolating people.
Shane Hastie: The other point that you touched on when we were chatting earlier was, empowerment. How do we create an environment where people can feel empowered.
Enabling people to be empowered [13:33]
Kimberley Fox: I think it starts with having some empathy and kindness. And the reason why I say that, from a woman’s perspective, is that we have a lot of unconscious bias. I don’t think the majority of people are intentionally biased. I think the majority of people are very humble, and they want women to feel included. They want everyone to feel included, and they don’t actually realize when they might have a bias towards something. And so as a woman, and this is just an example, I was in a really heated conversation one time with my male coworker, and he put his arm out in front of me to move me aside to show me, and I’m putting air quotes up, the “right way” to do it. And I looked at him and I was just like, “Would you ever do that to a man?” And he looked at me and all of the blood just drained from his face. And he had this a-ha moment where he goes, “Oh my gosh, I am so sorry.” Genuinely sorry that that’s what he did, because he realized that he wouldn’t, and that it felt very biased to me.
But in that situation, it’s like I felt his embarrassment that he did that. And so shaming him in that instance, being like yeah, you did, don’t do it again, or something like that, isn’t going to make him feel good. It’s not going to make my working relationship with him any better. But instead, you respond with empathy in these situations, say, “We’re all learning, you didn’t mean to do it.” And it can be a positive experience. And so that’s another thing that I do bring into my cooking classes, just trying to bring in some empathy and understanding of where people are in their lives, and bring in some personal experience because that is really how we connect. And that actually leads to empowerment, because if everyone feels like they belong there and that their voice is heard, then they’re going to feel empowered to speak up and share their perspectives and be able to contribute to the team, which ultimately leads to more growth for the company because the team’s working better.
Shane Hastie: Kimberly, some really interesting thoughts here… some good advice. You do cooking. Give me a recipe. What’s your favorite recipe that I can listen to and-
A recipe for Graham Cracker Peach Crumble [16:01]
Kimberley Fox: I’ll give you… okay, on my site, there is a Graham Cracker Peach Crumble that is chef’s kiss. Like I developed this thing, and I gave it to all my neighbors. I gave it to everybody, because I’m so proud of this Graham Cracker Peach Crumble. And so if you’re looking for a standout, it’s-going-to-work, I’m-going-to-impress-people dish, that’s where I would send you. And it’s a one bowl, one pan dish, which makes it very approachable for hopefully all of the listeners.
Shane Hastie: So do you want to talk us through?
Kimberley Fox: Sure, absolutely. I like peeling peaches. You don’t need to do it, but I don’t really like the fuzziness. And so you slice up the peaches, you mix it with a little bit of sugar, and then you top it with the graham cracker crumble, and the graham cracker crumble has brown sugar, crushed up graham crackers, a little bit of flour, and a very healthy dose of salt, which creates that sweet and salty combination that I particularly enjoy. And so you spread that over the peaches and you bake it in the oven probably for, I think it’s about 40 minutes. And then the peaches bubble underneath and the top gets really nice and crisp, and then you just… I recommend a little scoop of vanilla ice cream on top so that it melts on the top and it’s quite delicious.
Shane Hastie: My mouth is watering.
Kimberley Fox: I saw you perk up when I said peaches. So it really is a very different recipe that I haven’t really seen anyone do before, and it’s absolutely outstanding. So I hope you have an opportunity to try it.
Shane Hastie: I will indeed. And listeners, if you try it, let us know what it’s like. Kimberly, thanks so much. And yeah, this has been fun. If people want to continue the conversation, where do they find you?
Kimberley Fox: You can go to my website frommarkettootable.com where you’ll also find the recipe for the peach crumble, but it has all the information about my corporate cooking classes. And if you’re someone who is like, I want to create an inclusive team, or you already have an inclusive team, but you just want to create a non-workplace environment that we talked about, I would love to hear from you. And you can also email me directly at Kimberly@frommarkettotable.com.
Shane Hastie: Thank you so much.
Kimberley Fox: Thank you.
MMS • Deepak Vohra
Key Takeaways
- PHP 8 adds support for union types. A union type is the union of multiple simple, scalar types. A value of a union type can belong to any of the simple types declared in the union type.
- PHP 8.1 introduces intersection types. An intersection type is the intersection of multiple class and interface types. A value of an intersection type must belong to all the class or interface types declared in the intersection type.
- PHP 8 introduces a `mixed` type that is equivalent to the union type `object|resource|array|string|int|float|bool|null`.
- PHP 8 introduces a `static` method return type, which requires the return value to be of the type of the enclosing class.
- PHP 8.1 introduces the `never` return type. A function returning `never` must not return a value, nor even implicitly return.
- PHP 8.2 adds support for `true`, `null`, and `false` as stand-alone types.
This article is part of the article series “PHP 8.x”. You can subscribe to receive notifications about new articles in this series via RSS. PHP continues to be one of the most widely used scripting languages on the web, used by 77.3% of all websites whose server-side programming language is known, according to W3Techs. PHP 8 brings many new features and other improvements, which we shall explore in this article series.
In this article we will discuss extensions to the PHP type system introduced in PHP 8, 8.1, and 8.2. Those include union, intersection, and `mixed` types, as well as the `static` and `never` return types. Additionally, PHP 8.2 also brings support for `true`, `null`, and `false` as stand-alone types.
Some definitions
Type declarations in PHP are used with class properties, function parameters, and function return types. Various definitions are often used to describe a language in relation to its type system: strong/weak, dynamic/static.
PHP is a dynamically typed language. Dynamic typing implies that type checking is performed at runtime, in contrast to static typing, where checks happen at compile time. PHP is by default weakly typed, which implies looser typing rules and implicit conversion at runtime. Strict typing may however be enabled in PHP.
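As an illustrative sketch of the difference (not from the original article; the function name is hypothetical), the default weak mode coerces a numeric string argument to the declared parameter type, whereas adding `declare(strict_types=1)` as the first statement of the calling file would make the same call throw a `TypeError`:

```php
<?php
// Weak typing (the default): the string "5" is implicitly coerced to int 5.
function double(int $n): int {
    return $n * 2;
}

echo double("5"); // 10
// With declare(strict_types=1) at the top of this file,
// double("5") would instead throw a TypeError.
```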
PHP makes use of types in different contexts:

- Standalone type: a type that can be used in a type declaration, examples being `int`, `string`, `array`.
- Literal type: a type that also checks the value itself in addition to the type of a value. PHP supports two literal types: `true` and `false`.
- Unit type: a type that holds a single value, for example `null`.
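As a sketch of how these definitions show up in practice (the function `indexOf` is hypothetical, not from the original article), the literal type `false` commonly appears in return declarations to document a "not found" result, mirroring built-ins such as `strpos()`:

```php
<?php
// int|false: the literal type false marks the "not found" case.
function indexOf(array $haystack, string $needle): int|false {
    $i = array_search($needle, $haystack, true);
    return $i === false ? false : $i;
}

var_dump(indexOf(["a", "b"], "b")); // int(1)
var_dump(indexOf(["a", "b"], "c")); // bool(false)
```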
Besides simple types, PHP 8 introduces composite types such as union types and intersection types. A union type is the union of multiple simple types. A value has to match just one of the types in the union type. A union type may be used to specify the type of a class property, function parameter, or function return type. The new type called `mixed` is a special type of union type.
PHP 8.1 also adds intersection types to specify class types that are actually the intersection of multiple class types. Two new return types have been added. The return type `never` is used if a function does not return, which could happen if the function throws an exception or calls `exit()`, as an example. The return type `static` implies that the return value must be an `instanceof` the class in which the method is called.
Union Types
If you are familiar with Venn diagrams, you may remember set union and intersection. Union types were introduced in PHP 8 to support the union of simple types. The syntax to use for a union type in a declaration is as follows:
Type1|Type2|....|TypeN
To start with an example, in the following script `$var1` belongs to union type `int|string|array`. Its value is initialized to an integer value, and subsequently the value is set to each of the other types in the union type declaration.
<?php
class A {
    public int|string|array $var1 = 1;
}
$a = new A();
echo $a->var1;
$a->var1 = "hello";
echo $a->var1;
$a->var1 = array(
    "1" => "a",
    "2" => "b",
);
var_dump($a->var1);
The output from the script is as follows:
1
hello
array(2) { [1]=> string(1) "a" [2]=> string(1) "b" }
PHP being a weakly typed language, if `$var1`’s value is set to `float` value `1.0`, an implicit conversion is performed. The following script will give an output of `1`.
<?php
class A {
    public int|string|array $var1 = 1;
}
$a = new A();
$a->var1 = 1.0;
echo $a->var1;
However, if strict typing is enabled with the declaration `declare(strict_types = 1)`, `$var1`’s value won’t get set to `1.0` and an error message is displayed:
Uncaught TypeError: Cannot assign float to property A::$var1 of type array|string|int
Weak typing can sometimes convert values to a closely related type, but type conversion cannot always be performed. For example, a variable of union type `int|array` cannot be assigned a string value, as in the following script:
<?php
class A {
    public int|array $var1 = 1;
}
$a = new A();
$a->var1 = "hello";
echo $a->var1;
An error message is displayed:
Uncaught TypeError: Cannot assign string to property A::$var1 of type array|int
In a slightly more complex example, the following script uses union types in a class property declaration, function parameters, and a function return type.
<?php
class A {
    public int|string $var1 = 1;
    public function fn1(int|string $a, int|string $b): int|string {
        return $a;
    }
}
$a = new A();
echo $a->var1;
echo $a->fn1("hello", "php");
The output is:
1
hello
Null in union types
A union type can be nullable, in which case null
is one of the types in the union type declaration. In the following script, a class property, function parameters, and function return type are all declared with a nullable union type.
<?php
class A {
    public int|string|null $var1 = 1;
    public function fn1(int|string|null $a = null, int|string|null $b = null): int|string|null {
        return $a;
    }
}
$a = new A();
echo $a->var1;
echo $a->fn1();
The false type in union types
The `false` pseudo-type may be used in a union type. In the following example, the `false` type is used in a class property declaration, function parameters, and a function return type, all of which are union type declarations.
<?php
class A {
    public int|string|false $var1 = 1;
    public function fn1(string|false $a, string|false $b): string|false {
        return $a;
    }
}
$a = new A();
echo $a->var1;
echo $a->fn1("hello", false);
The output is:
1
hello
If `bool` is used in a union type, `false` cannot also be used, as it is considered a duplicate declaration. Consider the following script, in which a function declares a parameter with a union type that includes both `false` and `bool`.
<?php
function fn1(string $a, bool|string|false $b): object {
return $b;
}
An error message is displayed:
Duplicate type false is redundant
Class types in union types
A class type may be used in a union type. The class type `A` is used in a union type in the following example:
<?php
class A{}
function fn1(string|int|A $a, array|A $b): A|string {
return $a;
}
$a=new A();
var_dump(fn1($a,$a));
The output is:
object(A)#1 (0) { }
However, a class type cannot be used in a union type if the type `object` is also used. The following script uses both a class type and `object` in a union type.
<?php
class A{}
function fn1(object|A $a, A $b): A {
return $a;
}
An error message is displayed:
Type A|object contains both object and a class type, which is redundant
If `iterable` is used in a union type, `array` and `Traversable` cannot be used additionally. The following script uses `iterable` with `array` in a union type:
<?php
function fn1(object $a, iterable|array $b): iterable {
}
An error message is displayed:
Type iterable|array contains both iterable and array, which is redundant
Union types and class inheritance
If one class extends another, a union type may declare both classes individually, or just declare the common superclass. As an example, in the following script, class `C` extends class `B`, which extends class `A`. Classes `A`, `B`, and `C` are then included in union type declarations of function parameters.
<?php
class A {
    public function fn1(): string {
        return "Class " . get_class($this) . " object";
    }
}
class B extends A {}
class C extends B {}
function fn1(A|B|C $a, A|B|C $b): string {
    return $a->fn1();
}
$c = new C();
echo fn1($c, $c);
Output is:
Class C object
Alternatively, `fn1` could declare only class `A` as the function parameters’ type, with the same output:
function fn1(A $a, A $b): string {
return $a->fn1();
}
Void in union types
The `void` return type cannot be used in a union type. To demonstrate, run the following script:
<?php
function fn1(int|string $a, int|string $b): void|string {
return $a;
}
An error message is displayed:
Void can only be used as a standalone type
Implicit type conversion with union types
Earlier we mentioned that, if strict typing is not enabled, a value that does not match any of the types in a union type gets converted to a closely related type. But, which of the closely related types? The following order of preference is used for implicit conversion:
- `int`
- `float`
- `string`
- `bool`
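The preference order can be observed directly with `get_debug_type()` (an illustrative sketch assuming the default weak mode; the function `pick` is hypothetical):

```php
<?php
// Coercion picks the first matching type in the order int, float, string, bool.
function pick(int|float|bool $x): string {
    return get_debug_type($x);
}

echo pick("1");   // int   — "1" is an integral numeric string
echo pick("1.5"); // float — numeric, but not integral
echo pick("yes"); // bool  — not numeric, so it falls through to bool
```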
As an example, a string value of “1” is converted to a float in the following script:
<?php
class A {
    public float|bool $var1 = 1.0;
}
$a = new A();
$a->var1 = "1";
var_dump($a->var1);
?>
The output is
float(1)
However, if `int` is included in the union type, the output is `int(1)`. In the following script a variable of union type `int|float` is assigned a string value of `"1.0"`.
<?php
class A {
    public int|float $var1 = 1;
}
$a = new A();
$a->var1 = "1.0";
var_dump($a->var1);
?>
The output is:
float(1)
In the following script the string value `"true"` is interpreted as a `string` value because the union type includes `string`.
<?php
class A {
    public string|bool $var1 = true;
}
$a = new A();
$a->var1 = "true";
var_dump($a->var1);
?>
Output is:
string(4) "true"
But in the following script the same `"true"` string is converted to a `bool` value because `string` is not in the union type:
<?php
class A {
    public int|bool|float $var1 = 1;
}
$a = new A();
$a->var1 = "true";
var_dump($a->var1);
?>
Output is:
bool(true)
In another example, with rather unpredictable output, consider the script that assigns a string value to a variable of union type `int|bool|float`.
<?php
class A {
    public int|bool|float $var1 = 1;
}
$a = new A();
$a->var1 = "hello";
var_dump($a->var1);
?>
Output is:
bool(true)
The string is converted to a `bool` because conversion to an `int` or `float` cannot be made.
The new mixed type
PHP 8 introduces a new type called `mixed`, which is equivalent to the union type `object|resource|array|string|int|float|bool|null`. As an example, in the following script, `mixed` is used as a class property type, function parameter type, and function return type. Strict typing is enabled to demonstrate that `mixed` is not affected by strict typing.
<?php
declare(strict_types=1);
class A {
    public mixed $var1 = 1;
    public function fn1(mixed $a): mixed {
        return $a;
    }
}
$a = new A();
var_dump($a->fn1(true));
var_dump($a->var1);
$a->var1 = "hello";
var_dump($a->var1);
The flexibility of `mixed` is apparent with the different types in the output:
bool(true)
int(1)
string(5) "hello"
It is redundant to use other scalar types in a union type along with `mixed`, as `mixed` is a union type of all other types. To demonstrate this, consider the script that uses `mixed` in a union type with `int`.
<?php
class A{
function fn1(int|mixed $a):mixed{ return $a;}
}
An error message is displayed:
Type mixed can only be used as a standalone type
Likewise, `mixed` cannot be used with any class types. The following script generates the same error message as before:
<?php
class A{}
class B{
function fn1(A|mixed $a):mixed{ return $a;}
}
The `mixed` return type can be narrowed in a subclass method’s return type. As an example, the `fn1` function in an extending class narrows a `mixed` return type to `array`.
<?php
class A {
    public function fn1(mixed $a): mixed { return $a; }
}
class B extends A {
    public function fn1(mixed $a): array { return $a; }
}
New standalone types null, false and true
Prior to PHP 8.2, the `null` type was PHP’s unit type, i.e. the type that holds a single value: `null`. Similarly, the `false` type was a literal type of type `bool`. However, the `null` and `false` types could only be used in a union type and not as stand-alone types. To demonstrate this, run a script such as the following in PHP 8.1 or earlier:
<?php
class A {
    public null $var1 = null;
}
$a = new A();
echo $a->var1;
The script outputs an error message in PHP 8.1:
Null can not be used as a standalone type
Similarly, to demonstrate that the `false` type could not be used as a stand-alone type in PHP 8.1 or earlier, run the following script:
<?php
class A{
public false $var1=false;
}
The script generates error message with PHP 8.1:
False can not be used as a standalone type
PHP 8.2 has added support for null
and false
as stand-alone types. The following script makes use of null
as a method parameter type and method return type.
<?php
class NullExample {
public null $nil = null;
public function fn1(null $v): null { return null; }
}
null
cannot be explicitly marked nullable with ?null
. To demonstrate, run the following script:
<?php
class NullExample {
public null $nil = null;
public function fn1(?null $v): null { return null; }
}
An error message is generated:
null cannot be marked as nullable
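The ? nullable shorthand remains valid for every other type, where ?int is simply equivalent to int|null. A minimal sketch with illustrative names:

```php
<?php
class NullableExample {
    public ?int $n = null;                            // same as int|null
    public function fn1(?int $v): ?int { return $v; }
}
var_dump((new NullableExample())->fn1(null));   // NULL
var_dump((new NullableExample())->fn1(3));      // int(3)
```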
The following script makes use of false
as a stand-alone type.
<?php
class FalseExample {
public false $false = false;
public function fn1(false $f): false { return false;}
}
null
and false
may be used in a union type, as in the script:
<?php
class NullUnionExample {
public null $nil = null;
public function fn1(null $v): null|false { return null; }
}
In addition, PHP 8.2 added true
as a new type that may be used as a standalone-type. The following script uses true
as a class property type, a method parameter type and a method return type.
<?php
class TrueExample {
public true $true = true;
public function f1(true $v): true { return true;}
}
The true
type cannot be used in a union type with false
, as in the script:
<?php
class TrueExample {
public function f1(true $v): true|false { return true;}
}
The script generates an error message:
Type contains both true and false, bool should be used instead
Similarly, true
cannot be used in a union type with bool
, as in the script:
class TrueExample {
public function f1(true $v): true|bool { return true;}
}
The script generates an error message:
Duplicate type true is redundant
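As both error messages suggest, bool is the type to declare when either boolean value is possible. A minimal sketch with an illustrative class name:

```php
<?php
class BoolExample {
    // bool covers both true and false, so no union is needed.
    public function f1(bool $v): bool { return !$v; }
}
var_dump((new BoolExample())->f1(true));   // bool(false)
```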
Intersection types
PHP 8.1 introduces intersection types as a composite type. Intersection types can only be used with class and interface types. An intersection type represents multiple class and interface types at once, rather than a single class or interface type. The syntax for an intersection type is as follows:
Type1&Type2&...&TypeN
When to use an intersection type, and when a union type? If a type is to represent one of multiple types, we use a union type. If a type is to represent multiple types at once, we use an intersection type. The next example best illustrates the difference. Consider classes A
, B
, and C
that have no relation. If a type is to represent any of these types use a union type, as in the script:
<?php
class A{
public function fn1():string{ return "Class A object";}
}
class B{
public function fn1():string{ return "Class B object";}
}
class C{
public function fn1():string{ return "Class C object";}
}
$c = new C();
function fn1(A|B|C $a, A|B|C $b): string {
return $a->fn1();
}
echo fn1($c,$c);
?>
The output is:
Class C object
If we had used an intersection type in the script, an error message would result. Modify the function to use intersection types:
function fn1(A&B&C $a, A&B&C $b): string {
return $a->fn1();
}
An error message is displayed:
Uncaught TypeError: fn1(): Argument #1 ($a) must be of type A&B&C, C given
The intersection would be suitable if class C
extends class B
extends class A
, as in the script:
<?php
class A{
public function fn1():string{ return "Class A object";}
}
class B extends A{
public function fn1():string{ return "Class B object";}
}
class C extends B{
public function fn1():string{ return "Class C object";}
}
$c = new C();
function fn1(A&B&C $a, A&B&C $b): string {
return $a->fn1();
}
echo fn1($c,$c);
?>
The output is:
Class C object
Scalar types and intersection types
Intersection types can only be used with class and interface types, and cannot be used with scalar types. To demonstrate this, modify the fn1
function in the preceding script to include a scalar type as follows:
function fn1(A&B&C&string $a, A&B&C $b): string {
}
An error message is displayed:
Type string cannot be part of an intersection type
Intersection types and union types
Intersection types cannot be combined with union types. Specifically, intersection type notation cannot be combined with the union type notation in the same type declaration. To demonstrate, modify the fn1
function as follows:
function fn1(A&B|C $a, A&B|C $b): string {
}
A parse error message is output:
Parse error: syntax error, unexpected token "|", expecting variable
An intersection type can be used with a union type in the same function declaration, as demonstrated by function:
function fn1(A&B&C $a, A|B|C $b): string {
}
Static and never return types
PHP 8.0 introduces static
as a new return type, and PHP 8.1 introduces never
as a new return type.
Static return type
If a return type is specified as static
, the return value must be of the same type as the class in which the method is defined. As an example, the fn1
method in class A
declares a return type static
, and therefore must return a value of type A
, which is the class in which the function is declared.
<?php
class A
{
public int $var1=1;
public function fn1(): static
{
return new A();
}
}
$a = new A();
echo $a->fn1()->var1;
Output is:
1
The function declaring a static
return type must belong to a class. To demonstrate this, declare static
as the return type of a global function:
<?php
function fn1(): static
{
}
An error message is displayed:
Cannot use "static" when no class scope is active
The class object returned must be the enclosing class. The following script would generate an error because the return value is of class type B
, while the static
return type requires it to be of type A
.
<?php
class B{}
class A
{
public int $var1=1;
public function fn1(): static
{
return new B();
}
}
$a = new A();
echo $a->fn1()->var1;
The following error message is generated:
Uncaught TypeError: A::fn1(): Return value must be of type A, B returned
If class B
extends class A
, the preceding script would run ok and output 1
.
class B extends A{}
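Because static is resolved against the calling class (late static binding), the same method can return an instance of whichever subclass it is invoked on. A minimal sketch, with illustrative class names:

```php
<?php
class A {
    // new static() creates an instance of the class the method was called on.
    public function fn1(): static { return new static(); }
}
class B extends A {}

var_dump(get_class((new A())->fn1()));   // string(1) "A"
var_dump(get_class((new B())->fn1()));   // string(1) "B"
```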
The static
return type can be used in a union type. If static
is used in a union type the return value doesn’t necessarily have to be the class type. As an example, static
is used in a union type in script:
<?php
class A
{
public function fn1(): static|int
{
return 1;
}
}
echo (new A())->fn1();
Output is
1
The type static
cannot be used in an intersection type. To demonstrate this, consider the following script.
<?php
class B extends A{}
class A
{
public function fn1(): static&B
{
return new B();
}
}
An error message is generated:
Type static cannot be part of an intersection type
Return type never
If the return type is never
, the function must not return a value, and in fact must not return at all, i.e., the function does not terminate normally. The never
return type is a subtype of every other return type. This implies that never
can replace any other return type in overridden methods when extending a class. A never
-returning function must do one of the following:
- Throw an exception
- Call exit()
- Start an infinite loop
If a never
-returning function is never to be called, the function could be empty, as an example:
<?php
class A
{
function fn1(): never {
}
}
The fn1()
function in class A
cannot be called, as calling it would make the function implicitly return NULL
. To demonstrate, modify the preceding script to:
<?php
class A
{
function fn1(): never {
}
}
(new A())->fn1();
An error message is generated when the script is run:
Uncaught TypeError: A::fn1(): never-returning function must not implicitly return
The following script would generate the same error message as the if
condition is never fulfilled and the function implicitly returns NULL
:
<?php
function fn1(): never
{
if (false) {
exit();
}
}
fn1();
Unlike the static
return type, never
may be used as a return type of a function not belonging to the scope of a class, for example:
<?php
class A
{
}
function fn1(): never {
}
A function with return type never
must not return a value. To demonstrate this, the following script declares a function that attempts to return a value although its return type is never
.
<?php
function fn1(): never
{
return 1;
}
An error message is generated:
A never-returning function must not return
If the return type is never
, the function must not return even implicitly. For example, the fn1
function in the following script does not return a value, but returns implicitly when its scope terminates.
<?php
function fn1(): never
{
}
fn1();
An error message is displayed:
Uncaught TypeError: fn1(): never-returning function must not implicitly return
What is the use of a function that declares return type never and does not terminate? The never
return type could be used during development, testing, and debugging. A function declaring the never
return type could terminate with a call to exit()
. Such a function may even be called, as in the following script:
<?php
function fn1(): never
{
exit();
}
fn1();
A never
returning function could throw an exception, for example:
<?php
function fn1(): never {
throw new Exception('Exception thrown');
}
A function including an infinite loop could declare a never
return type, as in the example:
<?php
function fn1(): never {
while (1){}
}
The never
return type can override any other type in derived classes, as in the example:
<?php
class A
{
function fn1(): int {
}
}
class B extends A{
function fn1(): never {
}
}
The never
return type cannot be used in a union type. To demonstrate this, the following script declares never
in a union type.
<?php
class A{
function fn1(): never|int {
}
}
An error message is displayed:
never can only be used as a standalone type
The never
type cannot be used in an intersection type. To demonstrate this, the following script uses never with class type B
.
<?php
class B{}
class A{
function fn1(): never&B {
}
}
An error message is generated:
Type never cannot be part of an intersection type
Scalar types do not support aliases
As of PHP 8, a warning message is generated if a scalar type alias is used. For example, if boolean
is used instead of bool
a message indicates that boolean
would be interpreted as a class name. To demonstrate this, consider the following script in which integer
is used as a parameter type in a function declaration:
<?php
function fn1(integer $param){}
fn1(1);
The output from the script includes a warning followed by a fatal error:
Warning: "integer" will be interpreted as a class name. Did you mean "int"? Write "integer" to suppress this warning
Fatal error: Uncaught TypeError: fn1(): Argument #1 ($param) must be of type integer, int given
Returning by reference from a void function is deprecated
As of PHP 8.1, returning by reference from a void
function is deprecated, because only variable references can be returned by reference whereas a void
return type does not return a value. To demonstrate this, run the following script:
<?php
function &fn1(): void {
}
The output is a deprecation message:
Deprecated: Returning by reference from a void function is deprecated
Summary
In this article we discussed the new type-related features introduced in PHP 8, including union, intersection, and mixed
types, and the static
and never
return types. In the next article, we will describe new features relating to PHP’s arrays, variables, operators, and exception handling.
This article is part of the article series “PHP 8.x”. You can subscribe to receive notifications about new articles in this series via RSS. PHP continues to be one of the most widely used scripting languages on the web with 77.3% of all the websites whose server-side programming language is known using it according to w3tech. PHP 8 brings many new features and other improvements, which we shall explore in this article series.
MMS • RSS
Posted on nosqlgooglealerts. Visit nosqlgooglealerts
SUNNYVALE, Calif., Jan. 19, 2023 — Satori today announced that it is adding support for MongoDB, a scalable and flexible NoSQL document database used for high-volume data storage and processing.
MongoDB has more than 37,000 customers in over 100 countries, and the MongoDB database platform has been downloaded over 300 million times. MongoDB users can now take advantage of Satori’s data security platform to reduce risk, improve productivity, and ensure compliance.
Through just-in-time self-service access workflows, users can remove the risk associated with overprivileged access to sensitive customer data stored within MongoDB Community, Enterprise, or Atlas servers. Companies can boost productivity by eliminating the overhead on DevOps teams by integrating with any Security Assertion Markup Language (SAML) identity provider and managing access by automatically enforcing policies. Satori also makes compliance easier to achieve by providing a centralized, rich, and searchable query audit log to demonstrate that access to data is on a need-to-know basis.
“We hear from many of our customers that they want one solution for managing access to data across all of their data stores, in a way that reduces the overhead on admins while providing a great experience for end users,” said Yoav Cohen, CTO, Satori. “With this new partnership, users will benefit from Satori’s built-in, continuous sensitive data discovery and simplified self-service access control to better manage the immense amounts of data stored across MongoDB.”
Many organizations are still managing access to data manually. It’s not unusual for requests to go through many different teams before access is granted, and engineers are pulled away from their primary responsibilities to respond to these requests. This process is laden with bottlenecks and can take weeks to complete, drastically hindering innovation. With Satori, users can quickly define access control workflows and automatically grant approvals without any engineering resources. Compliance is also streamlined as all data access requests and responses are recorded.
“We use MongoDB extensively when providing services to leading healthcare providers, and require a solution to audit access to data, as well as set security and access policies in a simple way. Satori helps us do that in a streamlined and efficient manner, so we can grow faster and meet our security and compliance requirements,” said Aviv Levit, Data & TechOps Team Lead, Yuvital. “Satori has helped reduce the complexity involved in managing our sensitive data and provides just-in-time access so we can make data-driven decisions and drive innovation.”
Satori for MongoDB is available for Satori customers. Satori already offers full relational database support for Snowflake, Amazon Redshift, Amazon Athena, MySQL, MariaDB, CockroachDB, Azure SQL, and others. Satori plans to further expand support for other NoSQL databases.
About Satori
Satori is revolutionizing data security. Its data security platform seamlessly integrates into any environment to automate access controls and deliver complete data-flow visibility utilizing activity-based discovery and classification. The platform provides context-aware and granular data access and privacy policies across all enterprise data flows, data access and data stores. With Satori, organizations and their data teams can confidently ensure that data security, privacy and compliance are in place – enabling data-driven innovation and competitive advantage.
Source: Satori
2022 was another busy year for data management, as the demand for enhanced approaches for optimized data usage to drive better business outcomes continued unabated.
In a year in which many organizations struggled to deal with the ongoing impact of the pandemic, inflationary concerns and the specter of economic slowdown, data remained the fundamental atomic unit. With data, organizations can make better decisions with data analytics, business intelligence and databases continuing to be the foundation for modern applications and operations.
Among the key data management trends observed in 2022 was the ongoing convergence of different database technologies, including online analytical processing (OLAP), online transaction processing (OLTP), relational and NoSQL technologies.
Transactional and analytical database technologies converge
For decades, databases have been segmented by use case. Transactional databases are one thing, while analytical databases are another and organizations tend to run both types of databases to support business operations.
A key data trend in 2022 was the continued movement away from the segmentation toward a converged, unified database model that integrates support for both OLAP and OLTP. It’s an approach that is known by a few different terms, including translytical as well as hybrid transactional and analytical processing (HTAP).
Multiple vendors upgraded their unified database platforms in 2022. Database vendor PingCap advanced its TiDB HTAP database with the launch of a database as a service (DBaaS) offering.
Oracle extended MySQL HeatWave to run on AWS, marking the first time the service can run outside of Oracle Cloud Infrastructure. The service expanded further in October with data lakehouse capabilities, enabling users to query data stored in cloud data lakes.
Google entered the converged database game with its AlloyDB database, launched in May. AlloyDB is based on the open source PostgreSQL transactional database and integrated analytics query acceleration for OLAP workloads.
The MongoDB 6.0 NoSQL database is also getting into the converged space, providing users with analytics query acceleration. The idea of converging analytics and transaction workloads is also coming to the Snowflake data cloud with a capability known as Hybrid Tables, announced in June.
Relational databases converge with NoSQL document databases
The convergence of capabilities in databases also extended in 2022 to bringing relational and NoSQL database technologies closer together.
SingleStore updated its namesake unified database to version 8.0 with new data query acceleration functionality specifically for JSON document data types, which in the past had largely been the domain of purpose-built NoSQL databases.
The convergence of relational and NoSQL is also coming to Oracle’s namesake database. Oracle Database 23c includes a feature called JSON relational duality that enables users to integrate JSON document data with the Oracle relational database model. Support for JSON was also part of the PostgreSQL 15 relational database update.
Big money continued to pour into data vendors in 2022
Among the key data trends in 2021 was the large amount of money vendors were able to raise during the year. It’s a trend continuing in 2022 across multiple segments of the data market, demonstrating the growing demand for collaboration, data integration, data observability, data lakehouse and data intelligence capabilities.
In January, data collaboration vendor Observable raised $35.6 million in funding as the company built out its capabilities for enabling organizations to better work together with data assets.
The growing demand for data lakehouse capabilities was reflected in a $160 million raise by Dremio in January, followed by a $250 million raise by rival Starburst in February. With the data lakehouse model, organizations are able to use a cloud data lake to act like a data warehouse for analytics. Getting data in and out of the lakehouse and cloud data system is another challenge and one that helped reverse ETL vendor Census raise $60 million.
While there is a trend toward converged databases, that doesn’t mean standalone analytical or transactional databases are going away. Analytics database vendor Imply raised $100 million for its platform based on the open source Apache Druid database. DataStax, which is one of the leading commercial supporters of the Apache Cassandra database, raised $115 million.
Another area of investment in 2022 was for data intelligence and data observability technologies. Monte Carlo raised $135 million for its data observability platform, while Alation raised $123 million to help push its data intelligence and data catalogue functionality forward.
There is likely to be less fundraising activity in the space in 2023, due to inflation coupled with concerns about the recession and overall economic slowdown. However, the core 2022 data management trends that saw the convergence of analytics and transactional capabilities, as well as relational capabilities being integrated with NoSQL, are not trends that will disappear anytime soon and will likely continue for years to come.
NoSQL document-oriented database provider Couchbase said it was adding Microsoft Azure support to its Capella managed database-as-a-service (DBaaS) offering.
This means that any enterprise customer who chooses Capella will be able to deploy and manage it on Azure in a streamlined manner after it is made generally available in the first quarter of 2023, the company said.
“Providing flexibility to go across cloud service providers is a huge advantage in today’s multi- and hybrid-cloud world. By extending Capella to Azure, we can better support our customers as they deploy innovative applications on the cloud of their choice,” Scott Anderson, senior vice president of product management and business operations at Couchbase, said in a press note.
Capella, which builds on the Couchbase Server database’s search engine and in-built operational and analytical capabilities, was first introduced on AWS in June 2020, just after the company raised $105 million in funding. Back then, Capella was known as the Couchbase Cloud, before being rebranded in October 2021.
In March 2021, the company introduced Couchbase Cloud in the form of a virtual private cloud (VPC) managed service in the Azure Marketplace.
A virtual private cloud (VPC) is a separate, isolated private cloud, which is hosted inside a public cloud.
In contrast to Couchbase Capella, which offers fully hosted and managed services, Couchbase Cloud was managed in the enterprise’s Azure account, a company spokesperson said.
Couchbase had added Google Cloud support for Capella in June last year. According to Google Cloud’s console, the public cloud service provider handles the billing of the database-as-a-service which can be consumed after buying credits.
“Although you register with the service provider to use the service, Google handles all billing,” the console page showed. On Google Cloud where the pricing is calculated in US dollars, one Capella Basic credit cost $1 and one Capella Enterprise credit costs $2. Pricing for one Capella Developer Pro credit stands at $1.25, the page showed.
Unlike Capella’s arrangement with Google Cloud, enterprises using the database-as-a-service on Azure will be billed by Couchbase and don’t need to interface with Microsoft, a company spokesperson said, adding that the pricing was based on a consumption model, without giving further details.
Couchbase, which claims Capella offers relatively lower cost of ownership, has added a new interface along with new tools and tasks to help developers design modern applications.
The new interface is inspired by popular developer-centric tools like GitHub, the company said, adding that the query engine is based on SQL++ to aid developer productivity.
The DBaaS, which has automated scaling and supports a multi-cloud architecture, comes with an array of application services bundled under the name of Capella App Services that can help with mobile and internet of things (IoT) applications synchronization.
MMS • Michael Hausenblas
Article originally posted on InfoQ. Visit InfoQ
Transcript
Hausenblas: My name is Michael Hausenblas. I work in the AWS Open Source Observability Service Team. I want to talk about state of OpenTelemetry: where we are, and what is next.
What Is Observability?
Let us have a very quick look at what observability really is. Observability is the capability to continuously generate and discover actionable insights based on signals from the system under observation with the goal to influence that system. We have sources, those might be compute, like a Kubernetes cluster or a Lambda function, a database, datastore. Those sources generate signals. We have agents, and then we have destinations, backends where we store these signals, and we graph these signals, and we interact and filter and alert on these signals. A human might consume that signal too, investigate something, understand something, or a piece of software, think of, for example, autoscaling. What’s with the agent? The piece of software that sits between the sources and the destinations, collects all the signals and ingests them into the backend destinations.
Signals
We’re dealing mostly with four major signal types. Logs, which are signals that have a textual payload. They’re capturing events. They’re mostly meant for humans, to be consumed by humans. We have metrics which are numerical signals, aggregates that have typically their semantics encoded in the name and/or via labels. They carry numerical values. Then we have distributed traces that are all about propagating an execution context along a request path. Then we have profiles, which OpenTelemetry not yet covers, but in the future, hopefully. Those are about the resource usage in the context of the code execution.
The Problem and Solution
What is the problem we’re trying to solve here? The first bit is really all about the journey from the sources to the destination. We have currently widely a number of different agents that we use to collect the source signals and ingest them into backends. The solution going forward is replace all these various agents, these proprietary protocols, and formats, and vendor specific agents with one agent that rules it all, and that is OpenTelemetry. Not just the agent, but also the instrumentation.
OpenTelemetry Concept
Let’s have a closer look at what is OpenTelemetry on a conceptual level. Formally, OpenTelemetry or OTel, is a Cloud Native Computing Foundation project, CNCF project. You might know CNCF from big hits like Kubernetes, and also Prometheus, and many others. What does OpenTelemetry really do? It provides a set of specifications, a protocol, OTLP, an agent that we call collector, and libraries, SDKs. Again, think of it, sources, agent, destination, OpenTelemetry sits in the middle. OpenTelemetry aims to support all major signal types. Currently, we’re focusing on traces, metrics, and logs, across 11 programming languages, from Java, over Python, to things like Erlang and Elixir. The big advantage of OpenTelemetry besides that it’s an open standard and all the vendors, and all the ISVs, and all the cloud providers that are behind it, it’s really that it turns this telemetry challenge, instrumenting your code and collecting the different signal types, ingesting them, into table stakes. It makes it table stakes. On top of that, you get correlation of different signal types, so you can more easily jump between these different signals.
OpenTelemetry Collector
If we zoom in, in the middle, into this collector, how does that look like? Conceptually, we’re talking about so-called pipelines. This is a per signal type, so a pipeline for logs, a pipeline for metrics, a pipeline for traces, a pipeline, future, potentially for profiles. That, again, conceptually have three different types of components that you can use there, a bit like Lego bricks. You have receivers, those are inbound or ingress, where from the signal sources, from the bottom, downstream signals come into the collector. For example, you might have an OTLP, so a native OpenTelemetry receiver. Then there are processors, in the middle of the pipeline you want to do something, for example, logs. You might want to drop certain logs, or redact them because there’s PII, Personally Identifiable Information in there. Or you want to batch them, so rather than sending one signal after there, you batch it up for 10 seconds, or for whatever number of metrics, for example, or traces. Then there are the exporters, which allow you to ingest those signals into the backend destinations, for example, to Jaeger, Prometheus. You can have many pipelines. You can have many pipelines that cover the same signal type. You can treat them independently. You could have one log pipeline for one specific environment like development that lands the logs in a certain backend, with another one for production. You see, this OpenTelemetry Collector is a very substantial part of the OpenTelemetry project and the overall value prop. What are the three main components in the pipeline? It is receiver, processor, and exporter. The pipeline wires up these three component types, and let you build these different routing and filtering pipelines as you see fit.
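The pipeline wiring described above is expressed in the collector's YAML configuration. A minimal sketch for a single traces pipeline (the batch timeout and the Jaeger endpoint are illustrative assumptions, not values from the talk):

```yaml
receivers:
  otlp:                       # native OTLP ingest
    protocols:
      grpc:

processors:
  batch:                      # batch signals before export, e.g. every 10s
    timeout: 10s

exporters:
  otlp/jaeger:                # forward traces to a Jaeger backend over OTLP
    endpoint: jaeger:4317

service:
  pipelines:
    traces:                   # one pipeline per signal type; more can be added
      receivers: [otlp]
      processors: [batch]
      exporters: [otlp/jaeger]
```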
Distros
There are three ways or three fundamental approaches to how you can use the agent, the collector. Different vendors and different cloud providers indeed have different approaches to that. I just used the official documentation, the opentelemetry.io/vendors. For each of the providers, I dug into the descriptions and tried to figure out what are the different signal types that they are currently providing, in what state, like GA, or preview, or beta? How do they deal with the collector? Are they themselves maintaining collector to use in the upstream collector, which is provided by the project? What’s with the SDKs? Is there a specific SDK, or again, upstream? If the relative, across the board, the providers have managed OTLP endpoint, so natively allow you to ingest OpenTelemetry data?
OpenTelemetry Adoption
That’s a basic overview on OpenTelemetry. Let’s see, in terms of adoption. I will present two different survey data. Here, on the one hand, the first two slides on the OpenTelemetry community quarterly survey. Not very surprising, given where we are with the adoption, traces went GA in 2021. Metrics are going GA as we speak. Number of those things are stable, we’ll get back to that in the roadmap. Logs will be going GA in 2023. It’s not too surprising that currently, half of the people who responded to that survey said they’re using it for tracing, which makes a lot of sense. A third for metrics, roughly. Looking into the future, then the picture slightly changes, again, to be expected that logs will take a bigger part, and metrics as well, slightly.
Continuing with this survey, again, asking about what components, in the widest sense, both collector and across the program languages, and there you see that at least collector, and Go, Java, Python, and JavaScript. Leading the pack, Go doesn’t surprise me again too much, because the whole cloud native system, from Kubernetes to Prometheus to the OpenTelemetry Collector are written in Go, so there is a certain affinity there for early adopters, at least.
Moving on to a second survey, which I self-ran, and essentially asked people to provide their feedback. The first two are really just setting the scene. What agents are you currently using? I was a little bit surprised to see already quite a good share, two-thirds, saying that they are using the OpenTelemetry Collector. It might be a selection by us. Folks who are already using OpenTelemetry are more open to responding to that survey. Then, in the backend destinations, where do you send signals to? Prometheus is clearly leading the pack there, followed by others, or across the board, CloudWatch and Elasticsearch. Most interesting that, really, I want to point your attention to this, is, what are the biggest pain points of your current agent setup? Interestingly enough, lack of correlation, is with half of the respondents, indeed, the number one, which is a perfect fit for OpenTelemetry to be there, very honest. Followed by too many agents. Obviously, that’s the value prop of OpenTelemetry. You want to consolidate rather than having multiple agents running, you want to have one agent there.
Moving on to the second part, I asked about adopting OpenTelemetry. What’s the motivation, what drives you to adopt OpenTelemetry? Both industry standard because it is an industry standard, and to reduce vendor lock-in are pretty much the two main reasons why folks are adopting OpenTelemetry. Ask further and you see that 71 out of 91 people here answered this question. If you’re already using OpenTelemetry, what setup are you using? Indeed, that reflects also earlier on distro survey that I presented, that a good share are using upstream distro and collector, which is in line with what you would expect, because the majority of distributions indeed use upstream. There are certain challenges when you’re using upstream or roll your own. It’s not bad, don’t get me wrong, but it means that you’re responsible, you’re on the hook. You need to security patch it. You need to make sure that the resource usage is in place. You’re responsible for all the things that are going on in the collector.
One last bit of information here, which I also found very interesting: assuming that someone is already into OpenTelemetry, what are the reasons that slow you down? What are the roadblocks? What are the paper cuts? Again, very much as expected, almost half of the people say that what they need, for example logs, is not yet fully available. That is not a big surprise, given where we are in mid-2022. Other insights there that we as a community need to work on: lack of documentation or tutorials available, and the software not being stable enough. That also includes the SDKs.
Roadmap
Now that you have a somewhat better understanding of the adoption, where and how and why folks are using OpenTelemetry, let's have a look at the roadmap. Where are we? Where are we going? Distributed traces have been GA since the end of 2021. Everything is stable there. You can use it in production. Metrics became stable this year, in May to be precise. We're still in the process of the various SDKs implementing metrics and making their turn into GA. Release candidates exist, and you can use metrics in production. Logs, on the other hand, are still under active development. While we are stable on the protocol level, there are a number of things that still need to be figured out. That's where we need your feedback. We need to understand: what exactly is the usage? What are the expectations? How do you want to use logs? Clearly, as you can see from the data, people want logs. People are, to a certain extent, also waiting for logs to be available in GA so that they can finally start to consolidate and adopt everything.
Summary
OpenTelemetry is the vendor-neutral telemetry standard. It's an open standard for all signal types. It enables you to instrument once and ingest anywhere, making telemetry effectively table stakes. Vendors at large have agreed that they do not want to compete on the telemetry bits, the agents, the performance there, and so on, but on the backends, allowing you to consume the different signals, correlate them, and so on. OpenTelemetry has broad industry adoption. All major ISVs in this space and all major cloud providers are behind it and have dedicated teams; I am an example myself, as a product manager for our distribution of OpenTelemetry at AWS. If you ask yourself whether you should be investing in OpenTelemetry, this is a big plus. This is something where you have the safety and security of the future.
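To make "instrument once and ingest anywhere" concrete, here is a minimal OpenTelemetry Collector configuration sketch that wires an OTLP receiver to a Prometheus exporter, the leading backend in the survey. The port numbers are assumptions; check them against the Collector version you run.

```yaml
# Minimal OpenTelemetry Collector pipeline: applications send OTLP,
# the collector batches it and exposes metrics for Prometheus to scrape.
receivers:
  otlp:
    protocols:
      grpc:                        # OTLP over gRPC (default port 4317)

processors:
  batch:                           # batch telemetry to reduce export overhead

exporters:
  prometheus:
    endpoint: "0.0.0.0:8889"       # scrape endpoint; the port is an assumption

service:
  pipelines:
    metrics:
      receivers: [otlp]
      processors: [batch]
      exporters: [prometheus]
```

Swapping the backend means changing only the exporter block; the application instrumentation stays untouched, which is the vendor-neutrality point made above.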
In 2021, traces went GA. This year, metrics go GA. In 2023, logs will go GA, which means if you're considering adopting OpenTelemetry, now is the time. There's super interesting stuff going on in the community. Earlier this year, we had an initiative bringing profiles in as a signal type: think of continuous profiling, things like Pixie, Parca, Pyroscope, bringing that into OpenTelemetry. There's a working group around that, and you can participate if you want. Then there's real-user monitoring. There are collector improvements. There are so many things going on. By and large, the focus is currently really on logs. Once logs are out of the door, the community will probably move on and focus on the other things that I mentioned here. I'm currently writing a book with Manning called, "Cloud Observability in Action", where I'm covering these topics as well.
Creating Environments High in Psychological Safety with a Combined Top-Down and Bottom-Up Approach
MMS • Ben Linders
Article originally posted on InfoQ.
Leadership is critical for making psychological safety happen, but they need to lead by example and show that it’s safe for people to take interpersonal risks. Complementing leadership with team workshops in communication skills can enable people to speak up and feel safe to fail.
Jitesh Gosai shared his experience with psychological safety at Lean Agile Scotland 2022.
Applying ideas from psychological safety can enable people to speak up in teams about what they don't know, don't understand, or mistakes they have made, as Gosai explained in Learnings from Applying Psychological Safety. Trust and creating safe spaces are essential for all teams. Enabling people to push out of their comfort zones and take interpersonal risks without fear of punishment or embarrassment is critical for people to be able to speak up.
Working with teams Gosai found out that communication skills are not evenly spread, with some members being much more skilled than others:
We thought that if teams could communicate more effectively, members might be more willing to speak up when asked.
This led them to create a top-down approach to develop leadership behaviours that encourage people to take interpersonal risks, but also a bottom-up approach that equipped team members with the skills to communicate more effectively.
To get started with the bottom-up approach, they developed and ran a series of workshops involving different teams across the department to improve skills in active listening, asking questions, and giving and receiving feedback:
One of our objectives with these workshops was for each to be stand-alone so teams could choose which they attended, but also allow them to get hands-on with the skills and walk away with something valuable that they can use immediately.
Their initial surveys of teams showed high workshop ratings for engagement and learning outcomes; the workshops were successful in improving people's communication skills. They were less sure if people were using those newfound skills. This is where the top-down approach comes into play: working with the team leads to encourage people to speak up using their new skills, Gosai mentioned.
When psychological safety levels increase, leaders expect the number of problems to go down, but quite the opposite happens, Gosai mentioned:
Teams with high levels of psychological safety tend to report more issues and therefore appear to have more problems than those with low psychological safety.
Leaders may believe making it safe to fail will result in people purposely causing more failures. But what this doesn’t take into account is that people don’t naturally like to fail, Gosai explained:
From an early age, we are taught that failure is bad, so it doesn’t feel good when things don’t go as planned, and we try and limit failure. Therefore it is unlikely to lead to the blasé attitude that the team leads worry will occur. While there is a chance the odd individual may take this view, assuming all team members would behave this way would be an overreaction. Besides, if there are people with this attitude in your teams, it would be better to know sooner rather than when a major catastrophe occurs.
InfoQ interviewed Jitesh Gosai about psychological safety.
InfoQ: What made you combine a top-down and bottom-up approach?
Jitesh Gosai: These two approaches will work in a symbiotic relationship. Leaders of teams would show that it’s safe for interpersonal risk-taking. At the same time, team members would feel more confident in taking interpersonal risks as they would have the communication skills that help them articulate themselves better.
Then when leaders made the case that we need people to share what they do and don’t understand, team members would have the necessary skills to do so.
InfoQ: How did you get started with this approach?
Gosai: We looked at taking interpersonal risks as a nice-to-have and not the focus of the workshops for the time being. We could introduce interpersonal risks later with the team leads’ support. First, we wanted to be able to say we’d improve team communication skills objectively. We did this by sending them through the workshops and surveying the participants to see if it improved their skills.
The next stage for us will be developing the top-down approach by working with team leads and helping them understand how their behaviours and language can encourage or discourage interpersonal risk-taking. But also how they could measure levels of psychological safety in their teams. From there, we want to work with teams and help them connect how their communication skills can help them take interpersonal risks and demonstrate that nothing terrible will happen as a result, but quite the opposite.
InfoQ: What problem did you see when trying to address psychological safety?
Gosai: Team leads often worry that all this speaking up will slow the team down or tie them up in knots discussing and debating topics unrelated to everyday work. But team members are people and will be affected by current events.
If you want them to do their best thinking, that may mean giving them the space to discuss those matters with people they may spend as much time with as their own families. But leaders should set boundaries based on organisational policies, the consequences of crossing those boundaries and the reporting mechanisms when those boundaries have been crossed.
MMS • Johan Janssen
Article originally posted on InfoQ.
VMware has released Spring Cloud 2022.0.0, codenamed Kilburn, featuring updates to many of the Spring Cloud sub-projects. Built upon Spring Framework 6 and Spring Boot 3, introduced in November 2022, Spring Cloud is aligned with Java 17 and compatible with Jakarta EE 9. This release supports Ahead-of-Time (AOT) compilation and the creation of native images with GraalVM.
Spring Cloud Commons now supports weighted load-balancing by setting the property spring.cloud.loadbalancer.configurations to weighted. The OAuth integration now uses the new OAuth2 support from Spring Security.
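As an illustrative sketch, the weighted load-balancer property could be set in a standard Spring Boot application.yml like this (the file layout is an assumption; the property name and value come from the release notes above):

```yaml
# application.yml: enable the weighted load-balancer for all services
spring:
  cloud:
    loadbalancer:
      configurations: weighted
```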
Spring Cloud Gateway now supports Micrometer for observability as a replacement for Spring Cloud Sleuth. CORS may be disabled via the property spring.cloud.gateway.globalcors.enabled, and may be configured per route as metadata under the cors key:
spring:
  cloud:
    gateway:
      routes:
        - id: myroute
          uri: https://www.infoq.com/
          predicates:
            - Path=/myroute/**
          metadata:
            cors:
              allowedOrigins: '*'
              allowedMethods:
                - POST
Spring Cloud Kubernetes now supports fabric8 6.2.0 and version 17 of the Kubernetes Java Client. The Kubernetes-specific annotation, @ConditionalOnKubernetesEnabled, has been replaced by the more generic @ConditionalOnCloudPlatform Spring Boot annotation. Name-based and labels-based secrets and configmaps are now read separately to prevent potential issues. Service discovery with the DiscoveryClient now supports filtering by namespace to prevent exceptions when trying to access restricted namespaces.
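As a sketch of how the replacement annotation is used (this example is not from the article; the configuration class and bean are hypothetical), a configuration guarded by the generic Spring Boot annotation might look like this:

```java
import org.springframework.boot.autoconfigure.condition.ConditionalOnCloudPlatform;
import org.springframework.boot.cloud.CloudPlatform;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

// Beans in this class are only created when Spring Boot detects
// that the application is running on Kubernetes.
@Configuration
@ConditionalOnCloudPlatform(CloudPlatform.KUBERNETES)
public class KubernetesOnlyConfiguration {

    @Bean
    public String clusterGreeting() { // hypothetical bean, for illustration only
        return "running on Kubernetes";
    }
}
```

Because CloudPlatform is a general Spring Boot enum, the same mechanism covers other platforms (for example Cloud Foundry) without a Kubernetes-specific dependency.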
Spring Cloud Contract no longer supports Pact out of the box, as Pact, a tool for contract testing, broke the binary and functional compatibility, even with patch versions. The migration guide may be used to upgrade existing applications to the latest version of Spring Cloud Contract.
Spring Cloud OpenFeign is declared feature complete, which means no new features will be added, but security issues and bugs will be resolved and minor pull requests from the community will be considered. Spring Framework introduced the HTTP Interface in version 6.0, which will be used to replace OpenFeign.
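As a hedged sketch of the Spring Framework 6 HTTP Interface mentioned above, a declarative client might look like the following; the GreetingClient interface, its endpoint paths, and the base URL are illustrative assumptions, while the annotations and factory types are part of Spring Framework 6.0:

```java
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.reactive.function.client.WebClient;
import org.springframework.web.reactive.function.client.support.WebClientAdapter;
import org.springframework.web.service.annotation.GetExchange;
import org.springframework.web.service.annotation.HttpExchange;
import org.springframework.web.service.invoker.HttpServiceProxyFactory;

// Declarative HTTP client: Spring generates the implementation at runtime,
// similar in spirit to an OpenFeign client.
@HttpExchange(url = "/greetings")
interface GreetingClient {
    @GetExchange("/{name}")
    String greet(@PathVariable String name);
}

class GreetingClientFactory {
    static GreetingClient create() {
        WebClient webClient = WebClient.builder()
                .baseUrl("https://example.com") // illustrative base URL
                .build();
        HttpServiceProxyFactory factory = HttpServiceProxyFactory
                .builder(WebClientAdapter.forClient(webClient))
                .build();
        return factory.createClient(GreetingClient.class);
    }
}
```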
Spring Cloud CLI, Spring Cloud Cloudfoundry and Spring Cloud Sleuth are no longer part of the release train.
Spring Cloud 2022.0.0 can be used after adding the following configuration to the Maven POM file:

<dependencyManagement>
    <dependencies>
        <dependency>
            <groupId>org.springframework.cloud</groupId>
            <artifactId>spring-cloud-dependencies</artifactId>
            <version>2022.0.0</version>
            <type>pom</type>
            <scope>import</scope>
        </dependency>
    </dependencies>
</dependencyManagement>
<dependencies>
    <dependency>
        <groupId>org.springframework.cloud</groupId>
        <artifactId>spring-cloud-starter-config</artifactId>
    </dependency>
    <dependency>
        <groupId>org.springframework.cloud</groupId>
        <artifactId>spring-cloud-starter-netflix-eureka-client</artifactId>
    </dependency>
</dependencies>
Alternatively, the following Gradle configuration may be used:
plugins {
    id 'java'
    id 'org.springframework.boot' version '3.0.0'
    id 'io.spring.dependency-management' version '1.1.0'
}

repositories {
    mavenCentral()
}

ext {
    set('springCloudVersion', "2022.0.0")
}

dependencies {
    implementation 'org.springframework.cloud:spring-cloud-starter-config'
    implementation 'org.springframework.cloud:spring-cloud-starter-netflix-eureka-client'
}

dependencyManagement {
    imports {
        mavenBom "org.springframework.cloud:spring-cloud-dependencies:${springCloudVersion}"
    }
}
Breaking changes for this release may be found in the Release Notes and feedback may be provided via GitHub, Gitter, Stack Overflow or Twitter.
MMS • RSS
Posted on nosqlgooglealerts.
Aerospike is a real-time data platform company.
In an era when real-time data streaming companies are fond of giving away promo t-shirts emblazoned with phrases like "Is Batch Dead?" (spoiler alert: you're supposed to say yes, it is), we can dig into most real-time data stories with comparatively keen ears; this is, arguably, a technology very much of the time and of the moment.
So what’s new at Aerospike?
The firm has now come forward with the release of Aerospike Connect for Elasticsearch, which, as happenstance would have it, is an Aerospike connection and integration technology to extend the organisation’s core platform functionalities with Elasticsearch, the free and open source search and analytics technology based upon the Lucene library.
The new connector enables developers and data architects to leverage (Ed: do they mean ‘use’?) Elasticsearch to perform fast full-text-based searches on real-time data stored in Aerospike Database 6, the company’s database.
Aerospike Connect for Elasticsearch comes on the back of what the company has already stated as a commitment to deliver search and analytics capabilities for its data layer.
The new connector enables fast, full-text searches on data – and complements the recently announced Aerospike SQL powered by Starburst product, which allows users to perform large-scale SQL analytics queries on data stored in Aerospike Database 6.
With a list of capabilities, including JSONPath Query support on Aerospike Database 6, Aerospike customers now have a variety of options to choose from to power their search and analytics use cases.
“With enterprises around the world rapidly adopting [real-time data platforms], there is a growing demand for high-speed and highly reliable full-text search capabilities on data stored in Aerospike Database 6,” said Subbu Iyer, CEO of Aerospike. “Aerospike Connect for Elasticsearch unlocks new frontiers of fast, predictably performant full-text search, which is critical to meet our customers’ needs.”
Full-text-search for real-time data
Using Aerospike Connect for Elasticsearch, architects and developers can integrate Aerospike's real-time database with Elasticsearch to enable a wide range of full-text search-based use cases such as:
- E-commerce: enriched customer experience that increases shopping cart size.
- Customer Support: enhanced self-service and reduced service delivery costs.
- Workplace Applications: unified search across multiple productivity tools.
- Website Experience: faster access to resources and increased site conversions.
- Federal: smart cities experience better results in maintaining mission-critical real-time applications.
Aerospike Connect for Elasticsearch adds to the list of connectors into enterprise data pipelines as part of the Aerospike Connect product line. With customers successfully using connectors for Kafka, Pulsar, Spark and Presto-Trino, Aerospike insists its customers now have more choices.
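To illustrate the kind of full-text lookup the connector makes possible once Aerospike records have been indexed, here is a standard Elasticsearch match query, sent to the _search endpoint of an index. The index name ("products") and field name ("description") are assumptions for illustration, not part of the Aerospike announcement:

```json
{
  "query": {
    "match": {
      "description": "wireless noise-cancelling headphones"
    }
  }
}
```

A match query analyzes the search text and scores documents by relevance, which is the Lucene-backed capability Elasticsearch adds on top of the key-value access patterns of the underlying database.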